Dec 11 08:30:20 localhost kernel: Linux version 5.14.0-648.el9.x86_64 (mockbuild@x86-05.stream.rdu2.redhat.com) (gcc (GCC) 11.5.0 20240719 (Red Hat 11.5.0-14), GNU ld version 2.35.2-69.el9) #1 SMP PREEMPT_DYNAMIC Fri Dec 5 11:18:23 UTC 2025
Dec 11 08:30:20 localhost kernel: The list of certified hardware and cloud instances for Red Hat Enterprise Linux 9 can be viewed at the Red Hat Ecosystem Catalog, https://catalog.redhat.com.
Dec 11 08:30:20 localhost kernel: Command line: BOOT_IMAGE=(hd0,msdos1)/boot/vmlinuz-5.14.0-648.el9.x86_64 root=UUID=cbdedf45-ed1d-4952-82a8-33a12c0ba266 ro console=ttyS0,115200n8 no_timer_check net.ifnames=0 crashkernel=1G-2G:192M,2G-64G:256M,64G-:512M
Dec 11 08:30:20 localhost kernel: BIOS-provided physical RAM map:
Dec 11 08:30:20 localhost kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
Dec 11 08:30:20 localhost kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
Dec 11 08:30:20 localhost kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Dec 11 08:30:20 localhost kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000bffdafff] usable
Dec 11 08:30:20 localhost kernel: BIOS-e820: [mem 0x00000000bffdb000-0x00000000bfffffff] reserved
Dec 11 08:30:20 localhost kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Dec 11 08:30:20 localhost kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Dec 11 08:30:20 localhost kernel: BIOS-e820: [mem 0x0000000100000000-0x000000023fffffff] usable
Dec 11 08:30:20 localhost kernel: NX (Execute Disable) protection: active
Dec 11 08:30:20 localhost kernel: APIC: Static calls initialized
Dec 11 08:30:20 localhost kernel: SMBIOS 2.8 present.
Dec 11 08:30:20 localhost kernel: DMI: OpenStack Foundation OpenStack Nova, BIOS 1.15.0-1 04/01/2014
Dec 11 08:30:20 localhost kernel: Hypervisor detected: KVM
Dec 11 08:30:20 localhost kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Dec 11 08:30:20 localhost kernel: kvm-clock: using sched offset of 4936102030 cycles
Dec 11 08:30:20 localhost kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Dec 11 08:30:20 localhost kernel: tsc: Detected 2800.000 MHz processor
Dec 11 08:30:20 localhost kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Dec 11 08:30:20 localhost kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Dec 11 08:30:20 localhost kernel: last_pfn = 0x240000 max_arch_pfn = 0x400000000
Dec 11 08:30:20 localhost kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs
Dec 11 08:30:20 localhost kernel: x86/PAT: Configuration [0-7]: WB  WC  UC- UC  WB  WP  UC- WT  
Dec 11 08:30:20 localhost kernel: last_pfn = 0xbffdb max_arch_pfn = 0x400000000
Dec 11 08:30:20 localhost kernel: found SMP MP-table at [mem 0x000f5ae0-0x000f5aef]
Dec 11 08:30:20 localhost kernel: Using GB pages for direct mapping
Dec 11 08:30:20 localhost kernel: RAMDISK: [mem 0x2d46a000-0x32a2cfff]
Dec 11 08:30:20 localhost kernel: ACPI: Early table checksum verification disabled
Dec 11 08:30:20 localhost kernel: ACPI: RSDP 0x00000000000F5AA0 000014 (v00 BOCHS )
Dec 11 08:30:20 localhost kernel: ACPI: RSDT 0x00000000BFFE16BD 000030 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Dec 11 08:30:20 localhost kernel: ACPI: FACP 0x00000000BFFE1571 000074 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Dec 11 08:30:20 localhost kernel: ACPI: DSDT 0x00000000BFFDFC80 0018F1 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Dec 11 08:30:20 localhost kernel: ACPI: FACS 0x00000000BFFDFC40 000040
Dec 11 08:30:20 localhost kernel: ACPI: APIC 0x00000000BFFE15E5 0000B0 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Dec 11 08:30:20 localhost kernel: ACPI: WAET 0x00000000BFFE1695 000028 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Dec 11 08:30:20 localhost kernel: ACPI: Reserving FACP table memory at [mem 0xbffe1571-0xbffe15e4]
Dec 11 08:30:20 localhost kernel: ACPI: Reserving DSDT table memory at [mem 0xbffdfc80-0xbffe1570]
Dec 11 08:30:20 localhost kernel: ACPI: Reserving FACS table memory at [mem 0xbffdfc40-0xbffdfc7f]
Dec 11 08:30:20 localhost kernel: ACPI: Reserving APIC table memory at [mem 0xbffe15e5-0xbffe1694]
Dec 11 08:30:20 localhost kernel: ACPI: Reserving WAET table memory at [mem 0xbffe1695-0xbffe16bc]
Dec 11 08:30:20 localhost kernel: No NUMA configuration found
Dec 11 08:30:20 localhost kernel: Faking a node at [mem 0x0000000000000000-0x000000023fffffff]
Dec 11 08:30:20 localhost kernel: NODE_DATA(0) allocated [mem 0x23ffd5000-0x23fffffff]
Dec 11 08:30:20 localhost kernel: crashkernel reserved: 0x00000000af000000 - 0x00000000bf000000 (256 MB)
Dec 11 08:30:20 localhost kernel: Zone ranges:
Dec 11 08:30:20 localhost kernel:   DMA      [mem 0x0000000000001000-0x0000000000ffffff]
Dec 11 08:30:20 localhost kernel:   DMA32    [mem 0x0000000001000000-0x00000000ffffffff]
Dec 11 08:30:20 localhost kernel:   Normal   [mem 0x0000000100000000-0x000000023fffffff]
Dec 11 08:30:20 localhost kernel:   Device   empty
Dec 11 08:30:20 localhost kernel: Movable zone start for each node
Dec 11 08:30:20 localhost kernel: Early memory node ranges
Dec 11 08:30:20 localhost kernel:   node   0: [mem 0x0000000000001000-0x000000000009efff]
Dec 11 08:30:20 localhost kernel:   node   0: [mem 0x0000000000100000-0x00000000bffdafff]
Dec 11 08:30:20 localhost kernel:   node   0: [mem 0x0000000100000000-0x000000023fffffff]
Dec 11 08:30:20 localhost kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000023fffffff]
Dec 11 08:30:20 localhost kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Dec 11 08:30:20 localhost kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Dec 11 08:30:20 localhost kernel: On node 0, zone Normal: 37 pages in unavailable ranges
Dec 11 08:30:20 localhost kernel: ACPI: PM-Timer IO Port: 0x608
Dec 11 08:30:20 localhost kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Dec 11 08:30:20 localhost kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Dec 11 08:30:20 localhost kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Dec 11 08:30:20 localhost kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Dec 11 08:30:20 localhost kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Dec 11 08:30:20 localhost kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Dec 11 08:30:20 localhost kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Dec 11 08:30:20 localhost kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Dec 11 08:30:20 localhost kernel: TSC deadline timer available
Dec 11 08:30:20 localhost kernel: CPU topo: Max. logical packages:   8
Dec 11 08:30:20 localhost kernel: CPU topo: Max. logical dies:       8
Dec 11 08:30:20 localhost kernel: CPU topo: Max. dies per package:   1
Dec 11 08:30:20 localhost kernel: CPU topo: Max. threads per core:   1
Dec 11 08:30:20 localhost kernel: CPU topo: Num. cores per package:     1
Dec 11 08:30:20 localhost kernel: CPU topo: Num. threads per package:   1
Dec 11 08:30:20 localhost kernel: CPU topo: Allowing 8 present CPUs plus 0 hotplug CPUs
Dec 11 08:30:20 localhost kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Dec 11 08:30:20 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0x00000000-0x00000fff]
Dec 11 08:30:20 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0x0009f000-0x0009ffff]
Dec 11 08:30:20 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0x000a0000-0x000effff]
Dec 11 08:30:20 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0x000f0000-0x000fffff]
Dec 11 08:30:20 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0xbffdb000-0xbfffffff]
Dec 11 08:30:20 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0xc0000000-0xfeffbfff]
Dec 11 08:30:20 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0xfeffc000-0xfeffffff]
Dec 11 08:30:20 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0xff000000-0xfffbffff]
Dec 11 08:30:20 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0xfffc0000-0xffffffff]
Dec 11 08:30:20 localhost kernel: [mem 0xc0000000-0xfeffbfff] available for PCI devices
Dec 11 08:30:20 localhost kernel: Booting paravirtualized kernel on KVM
Dec 11 08:30:20 localhost kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Dec 11 08:30:20 localhost kernel: setup_percpu: NR_CPUS:8192 nr_cpumask_bits:8 nr_cpu_ids:8 nr_node_ids:1
Dec 11 08:30:20 localhost kernel: percpu: Embedded 64 pages/cpu s225280 r8192 d28672 u262144
Dec 11 08:30:20 localhost kernel: pcpu-alloc: s225280 r8192 d28672 u262144 alloc=1*2097152
Dec 11 08:30:20 localhost kernel: pcpu-alloc: [0] 0 1 2 3 4 5 6 7 
Dec 11 08:30:20 localhost kernel: kvm-guest: PV spinlocks disabled, no host support
Dec 11 08:30:20 localhost kernel: Kernel command line: BOOT_IMAGE=(hd0,msdos1)/boot/vmlinuz-5.14.0-648.el9.x86_64 root=UUID=cbdedf45-ed1d-4952-82a8-33a12c0ba266 ro console=ttyS0,115200n8 no_timer_check net.ifnames=0 crashkernel=1G-2G:192M,2G-64G:256M,64G-:512M
Dec 11 08:30:20 localhost kernel: Unknown kernel command line parameters "BOOT_IMAGE=(hd0,msdos1)/boot/vmlinuz-5.14.0-648.el9.x86_64", will be passed to user space.
Dec 11 08:30:20 localhost kernel: random: crng init done
Dec 11 08:30:20 localhost kernel: Dentry cache hash table entries: 1048576 (order: 11, 8388608 bytes, linear)
Dec 11 08:30:20 localhost kernel: Inode-cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Dec 11 08:30:20 localhost kernel: Fallback order for Node 0: 0 
Dec 11 08:30:20 localhost kernel: Built 1 zonelists, mobility grouping on.  Total pages: 2064091
Dec 11 08:30:20 localhost kernel: Policy zone: Normal
Dec 11 08:30:20 localhost kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Dec 11 08:30:20 localhost kernel: software IO TLB: area num 8.
Dec 11 08:30:20 localhost kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=8, Nodes=1
Dec 11 08:30:20 localhost kernel: ftrace: allocating 49357 entries in 193 pages
Dec 11 08:30:20 localhost kernel: ftrace: allocated 193 pages with 3 groups
Dec 11 08:30:20 localhost kernel: Dynamic Preempt: voluntary
Dec 11 08:30:20 localhost kernel: rcu: Preemptible hierarchical RCU implementation.
Dec 11 08:30:20 localhost kernel: rcu:         RCU event tracing is enabled.
Dec 11 08:30:20 localhost kernel: rcu:         RCU restricting CPUs from NR_CPUS=8192 to nr_cpu_ids=8.
Dec 11 08:30:20 localhost kernel:         Trampoline variant of Tasks RCU enabled.
Dec 11 08:30:20 localhost kernel:         Rude variant of Tasks RCU enabled.
Dec 11 08:30:20 localhost kernel:         Tracing variant of Tasks RCU enabled.
Dec 11 08:30:20 localhost kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Dec 11 08:30:20 localhost kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=8
Dec 11 08:30:20 localhost kernel: RCU Tasks: Setting shift to 3 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=8.
Dec 11 08:30:20 localhost kernel: RCU Tasks Rude: Setting shift to 3 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=8.
Dec 11 08:30:20 localhost kernel: RCU Tasks Trace: Setting shift to 3 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=8.
Dec 11 08:30:20 localhost kernel: NR_IRQS: 524544, nr_irqs: 488, preallocated irqs: 16
Dec 11 08:30:20 localhost kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Dec 11 08:30:20 localhost kernel: kfence: initialized - using 2097152 bytes for 255 objects at 0x(____ptrval____)-0x(____ptrval____)
Dec 11 08:30:20 localhost kernel: Console: colour VGA+ 80x25
Dec 11 08:30:20 localhost kernel: printk: console [ttyS0] enabled
Dec 11 08:30:20 localhost kernel: ACPI: Core revision 20230331
Dec 11 08:30:20 localhost kernel: APIC: Switch to symmetric I/O mode setup
Dec 11 08:30:20 localhost kernel: x2apic enabled
Dec 11 08:30:20 localhost kernel: APIC: Switched APIC routing to: physical x2apic
Dec 11 08:30:20 localhost kernel: tsc: Marking TSC unstable due to TSCs unsynchronized
Dec 11 08:30:20 localhost kernel: Calibrating delay loop (skipped) preset value.. 5600.00 BogoMIPS (lpj=2800000)
Dec 11 08:30:20 localhost kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Dec 11 08:30:20 localhost kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127
Dec 11 08:30:20 localhost kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0
Dec 11 08:30:20 localhost kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Dec 11 08:30:20 localhost kernel: Spectre V2 : Mitigation: Retpolines
Dec 11 08:30:20 localhost kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Dec 11 08:30:20 localhost kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls
Dec 11 08:30:20 localhost kernel: RETBleed: Mitigation: untrained return thunk
Dec 11 08:30:20 localhost kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Dec 11 08:30:20 localhost kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Dec 11 08:30:20 localhost kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied!
Dec 11 08:30:20 localhost kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options.
Dec 11 08:30:20 localhost kernel: x86/bugs: return thunk changed
Dec 11 08:30:20 localhost kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode
Dec 11 08:30:20 localhost kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Dec 11 08:30:20 localhost kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Dec 11 08:30:20 localhost kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Dec 11 08:30:20 localhost kernel: x86/fpu: xstate_offset[2]:  576, xstate_sizes[2]:  256
Dec 11 08:30:20 localhost kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format.
Dec 11 08:30:20 localhost kernel: Freeing SMP alternatives memory: 40K
Dec 11 08:30:20 localhost kernel: pid_max: default: 32768 minimum: 301
Dec 11 08:30:20 localhost kernel: LSM: initializing lsm=lockdown,capability,landlock,yama,integrity,selinux,bpf
Dec 11 08:30:20 localhost kernel: landlock: Up and running.
Dec 11 08:30:20 localhost kernel: Yama: becoming mindful.
Dec 11 08:30:20 localhost kernel: SELinux:  Initializing.
Dec 11 08:30:20 localhost kernel: LSM support for eBPF active
Dec 11 08:30:20 localhost kernel: Mount-cache hash table entries: 16384 (order: 5, 131072 bytes, linear)
Dec 11 08:30:20 localhost kernel: Mountpoint-cache hash table entries: 16384 (order: 5, 131072 bytes, linear)
Dec 11 08:30:20 localhost kernel: smpboot: CPU0: AMD EPYC-Rome Processor (family: 0x17, model: 0x31, stepping: 0x0)
Dec 11 08:30:20 localhost kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver.
Dec 11 08:30:20 localhost kernel: ... version:                0
Dec 11 08:30:20 localhost kernel: ... bit width:              48
Dec 11 08:30:20 localhost kernel: ... generic registers:      6
Dec 11 08:30:20 localhost kernel: ... value mask:             0000ffffffffffff
Dec 11 08:30:20 localhost kernel: ... max period:             00007fffffffffff
Dec 11 08:30:20 localhost kernel: ... fixed-purpose events:   0
Dec 11 08:30:20 localhost kernel: ... event mask:             000000000000003f
Dec 11 08:30:20 localhost kernel: signal: max sigframe size: 1776
Dec 11 08:30:20 localhost kernel: rcu: Hierarchical SRCU implementation.
Dec 11 08:30:20 localhost kernel: rcu:         Max phase no-delay instances is 400.
Dec 11 08:30:20 localhost kernel: smp: Bringing up secondary CPUs ...
Dec 11 08:30:20 localhost kernel: smpboot: x86: Booting SMP configuration:
Dec 11 08:30:20 localhost kernel: .... node  #0, CPUs:      #1 #2 #3 #4 #5 #6 #7
Dec 11 08:30:20 localhost kernel: smp: Brought up 1 node, 8 CPUs
Dec 11 08:30:20 localhost kernel: smpboot: Total of 8 processors activated (44800.00 BogoMIPS)
Dec 11 08:30:20 localhost kernel: node 0 deferred pages initialised in 116ms
Dec 11 08:30:20 localhost kernel: Memory: 7763892K/8388068K available (16384K kernel code, 5795K rwdata, 13916K rodata, 4192K init, 7164K bss, 618228K reserved, 0K cma-reserved)
Dec 11 08:30:20 localhost kernel: devtmpfs: initialized
Dec 11 08:30:20 localhost kernel: x86/mm: Memory block size: 128MB
Dec 11 08:30:20 localhost kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Dec 11 08:30:20 localhost kernel: futex hash table entries: 2048 (131072 bytes on 1 NUMA nodes, total 128 KiB, linear).
Dec 11 08:30:20 localhost kernel: pinctrl core: initialized pinctrl subsystem
Dec 11 08:30:20 localhost kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Dec 11 08:30:20 localhost kernel: DMA: preallocated 1024 KiB GFP_KERNEL pool for atomic allocations
Dec 11 08:30:20 localhost kernel: DMA: preallocated 1024 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Dec 11 08:30:20 localhost kernel: DMA: preallocated 1024 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Dec 11 08:30:20 localhost kernel: audit: initializing netlink subsys (disabled)
Dec 11 08:30:20 localhost kernel: audit: type=2000 audit(1765441816.837:1): state=initialized audit_enabled=0 res=1
Dec 11 08:30:20 localhost kernel: thermal_sys: Registered thermal governor 'fair_share'
Dec 11 08:30:20 localhost kernel: thermal_sys: Registered thermal governor 'step_wise'
Dec 11 08:30:20 localhost kernel: thermal_sys: Registered thermal governor 'user_space'
Dec 11 08:30:20 localhost kernel: cpuidle: using governor menu
Dec 11 08:30:20 localhost kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Dec 11 08:30:20 localhost kernel: PCI: Using configuration type 1 for base access
Dec 11 08:30:20 localhost kernel: PCI: Using configuration type 1 for extended access
Dec 11 08:30:20 localhost kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Dec 11 08:30:20 localhost kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Dec 11 08:30:20 localhost kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Dec 11 08:30:20 localhost kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Dec 11 08:30:20 localhost kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Dec 11 08:30:20 localhost kernel: Demotion targets for Node 0: null
Dec 11 08:30:20 localhost kernel: cryptd: max_cpu_qlen set to 1000
Dec 11 08:30:20 localhost kernel: ACPI: Added _OSI(Module Device)
Dec 11 08:30:20 localhost kernel: ACPI: Added _OSI(Processor Device)
Dec 11 08:30:20 localhost kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Dec 11 08:30:20 localhost kernel: ACPI: Added _OSI(Processor Aggregator Device)
Dec 11 08:30:20 localhost kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Dec 11 08:30:20 localhost kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
Dec 11 08:30:20 localhost kernel: ACPI: Interpreter enabled
Dec 11 08:30:20 localhost kernel: ACPI: PM: (supports S0 S3 S4 S5)
Dec 11 08:30:20 localhost kernel: ACPI: Using IOAPIC for interrupt routing
Dec 11 08:30:20 localhost kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Dec 11 08:30:20 localhost kernel: PCI: Using E820 reservations for host bridge windows
Dec 11 08:30:20 localhost kernel: ACPI: Enabled 2 GPEs in block 00 to 0F
Dec 11 08:30:20 localhost kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Dec 11 08:30:20 localhost kernel: acpi PNP0A03:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI EDR HPX-Type3]
Dec 11 08:30:20 localhost kernel: acpiphp: Slot [3] registered
Dec 11 08:30:20 localhost kernel: acpiphp: Slot [4] registered
Dec 11 08:30:20 localhost kernel: acpiphp: Slot [5] registered
Dec 11 08:30:20 localhost kernel: acpiphp: Slot [6] registered
Dec 11 08:30:20 localhost kernel: acpiphp: Slot [7] registered
Dec 11 08:30:20 localhost kernel: acpiphp: Slot [8] registered
Dec 11 08:30:20 localhost kernel: acpiphp: Slot [9] registered
Dec 11 08:30:20 localhost kernel: acpiphp: Slot [10] registered
Dec 11 08:30:20 localhost kernel: acpiphp: Slot [11] registered
Dec 11 08:30:20 localhost kernel: acpiphp: Slot [12] registered
Dec 11 08:30:20 localhost kernel: acpiphp: Slot [13] registered
Dec 11 08:30:20 localhost kernel: acpiphp: Slot [14] registered
Dec 11 08:30:20 localhost kernel: acpiphp: Slot [15] registered
Dec 11 08:30:20 localhost kernel: acpiphp: Slot [16] registered
Dec 11 08:30:20 localhost kernel: acpiphp: Slot [17] registered
Dec 11 08:30:20 localhost kernel: acpiphp: Slot [18] registered
Dec 11 08:30:20 localhost kernel: acpiphp: Slot [19] registered
Dec 11 08:30:20 localhost kernel: acpiphp: Slot [20] registered
Dec 11 08:30:20 localhost kernel: acpiphp: Slot [21] registered
Dec 11 08:30:20 localhost kernel: acpiphp: Slot [22] registered
Dec 11 08:30:20 localhost kernel: acpiphp: Slot [23] registered
Dec 11 08:30:20 localhost kernel: acpiphp: Slot [24] registered
Dec 11 08:30:20 localhost kernel: acpiphp: Slot [25] registered
Dec 11 08:30:20 localhost kernel: acpiphp: Slot [26] registered
Dec 11 08:30:20 localhost kernel: acpiphp: Slot [27] registered
Dec 11 08:30:20 localhost kernel: acpiphp: Slot [28] registered
Dec 11 08:30:20 localhost kernel: acpiphp: Slot [29] registered
Dec 11 08:30:20 localhost kernel: acpiphp: Slot [30] registered
Dec 11 08:30:20 localhost kernel: acpiphp: Slot [31] registered
Dec 11 08:30:20 localhost kernel: PCI host bridge to bus 0000:00
Dec 11 08:30:20 localhost kernel: pci_bus 0000:00: root bus resource [io  0x0000-0x0cf7 window]
Dec 11 08:30:20 localhost kernel: pci_bus 0000:00: root bus resource [io  0x0d00-0xffff window]
Dec 11 08:30:20 localhost kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Dec 11 08:30:20 localhost kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window]
Dec 11 08:30:20 localhost kernel: pci_bus 0000:00: root bus resource [mem 0x240000000-0x2bfffffff window]
Dec 11 08:30:20 localhost kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Dec 11 08:30:20 localhost kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000 conventional PCI endpoint
Dec 11 08:30:20 localhost kernel: pci 0000:00:01.0: [8086:7000] type 00 class 0x060100 conventional PCI endpoint
Dec 11 08:30:20 localhost kernel: pci 0000:00:01.1: [8086:7010] type 00 class 0x010180 conventional PCI endpoint
Dec 11 08:30:20 localhost kernel: pci 0000:00:01.1: BAR 4 [io  0xc140-0xc14f]
Dec 11 08:30:20 localhost kernel: pci 0000:00:01.1: BAR 0 [io  0x01f0-0x01f7]: legacy IDE quirk
Dec 11 08:30:20 localhost kernel: pci 0000:00:01.1: BAR 1 [io  0x03f6]: legacy IDE quirk
Dec 11 08:30:20 localhost kernel: pci 0000:00:01.1: BAR 2 [io  0x0170-0x0177]: legacy IDE quirk
Dec 11 08:30:20 localhost kernel: pci 0000:00:01.1: BAR 3 [io  0x0376]: legacy IDE quirk
Dec 11 08:30:20 localhost kernel: pci 0000:00:01.2: [8086:7020] type 00 class 0x0c0300 conventional PCI endpoint
Dec 11 08:30:20 localhost kernel: pci 0000:00:01.2: BAR 4 [io  0xc100-0xc11f]
Dec 11 08:30:20 localhost kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x068000 conventional PCI endpoint
Dec 11 08:30:20 localhost kernel: pci 0000:00:01.3: quirk: [io  0x0600-0x063f] claimed by PIIX4 ACPI
Dec 11 08:30:20 localhost kernel: pci 0000:00:01.3: quirk: [io  0x0700-0x070f] claimed by PIIX4 SMB
Dec 11 08:30:20 localhost kernel: pci 0000:00:02.0: [1af4:1050] type 00 class 0x030000 conventional PCI endpoint
Dec 11 08:30:20 localhost kernel: pci 0000:00:02.0: BAR 0 [mem 0xfe000000-0xfe7fffff pref]
Dec 11 08:30:20 localhost kernel: pci 0000:00:02.0: BAR 2 [mem 0xfe800000-0xfe803fff 64bit pref]
Dec 11 08:30:20 localhost kernel: pci 0000:00:02.0: BAR 4 [mem 0xfeb90000-0xfeb90fff]
Dec 11 08:30:20 localhost kernel: pci 0000:00:02.0: ROM [mem 0xfeb80000-0xfeb8ffff pref]
Dec 11 08:30:20 localhost kernel: pci 0000:00:02.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Dec 11 08:30:20 localhost kernel: pci 0000:00:03.0: [1af4:1000] type 00 class 0x020000 conventional PCI endpoint
Dec 11 08:30:20 localhost kernel: pci 0000:00:03.0: BAR 0 [io  0xc080-0xc0bf]
Dec 11 08:30:20 localhost kernel: pci 0000:00:03.0: BAR 1 [mem 0xfeb91000-0xfeb91fff]
Dec 11 08:30:20 localhost kernel: pci 0000:00:03.0: BAR 4 [mem 0xfe804000-0xfe807fff 64bit pref]
Dec 11 08:30:20 localhost kernel: pci 0000:00:03.0: ROM [mem 0xfeb00000-0xfeb7ffff pref]
Dec 11 08:30:20 localhost kernel: pci 0000:00:04.0: [1af4:1001] type 00 class 0x010000 conventional PCI endpoint
Dec 11 08:30:20 localhost kernel: pci 0000:00:04.0: BAR 0 [io  0xc000-0xc07f]
Dec 11 08:30:20 localhost kernel: pci 0000:00:04.0: BAR 1 [mem 0xfeb92000-0xfeb92fff]
Dec 11 08:30:20 localhost kernel: pci 0000:00:04.0: BAR 4 [mem 0xfe808000-0xfe80bfff 64bit pref]
Dec 11 08:30:20 localhost kernel: pci 0000:00:05.0: [1af4:1002] type 00 class 0x00ff00 conventional PCI endpoint
Dec 11 08:30:20 localhost kernel: pci 0000:00:05.0: BAR 0 [io  0xc0c0-0xc0ff]
Dec 11 08:30:20 localhost kernel: pci 0000:00:05.0: BAR 4 [mem 0xfe80c000-0xfe80ffff 64bit pref]
Dec 11 08:30:20 localhost kernel: pci 0000:00:06.0: [1af4:1005] type 00 class 0x00ff00 conventional PCI endpoint
Dec 11 08:30:20 localhost kernel: pci 0000:00:06.0: BAR 0 [io  0xc120-0xc13f]
Dec 11 08:30:20 localhost kernel: pci 0000:00:06.0: BAR 4 [mem 0xfe810000-0xfe813fff 64bit pref]
Dec 11 08:30:20 localhost kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Dec 11 08:30:20 localhost kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Dec 11 08:30:20 localhost kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Dec 11 08:30:20 localhost kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Dec 11 08:30:20 localhost kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9
Dec 11 08:30:20 localhost kernel: iommu: Default domain type: Translated
Dec 11 08:30:20 localhost kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Dec 11 08:30:20 localhost kernel: SCSI subsystem initialized
Dec 11 08:30:20 localhost kernel: ACPI: bus type USB registered
Dec 11 08:30:20 localhost kernel: usbcore: registered new interface driver usbfs
Dec 11 08:30:20 localhost kernel: usbcore: registered new interface driver hub
Dec 11 08:30:20 localhost kernel: usbcore: registered new device driver usb
Dec 11 08:30:20 localhost kernel: pps_core: LinuxPPS API ver. 1 registered
Dec 11 08:30:20 localhost kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti <giometti@linux.it>
Dec 11 08:30:20 localhost kernel: PTP clock support registered
Dec 11 08:30:20 localhost kernel: EDAC MC: Ver: 3.0.0
Dec 11 08:30:20 localhost kernel: NetLabel: Initializing
Dec 11 08:30:20 localhost kernel: NetLabel:  domain hash size = 128
Dec 11 08:30:20 localhost kernel: NetLabel:  protocols = UNLABELED CIPSOv4 CALIPSO
Dec 11 08:30:20 localhost kernel: NetLabel:  unlabeled traffic allowed by default
Dec 11 08:30:20 localhost kernel: PCI: Using ACPI for IRQ routing
Dec 11 08:30:20 localhost kernel: PCI: pci_cache_line_size set to 64 bytes
Dec 11 08:30:20 localhost kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff]
Dec 11 08:30:20 localhost kernel: e820: reserve RAM buffer [mem 0xbffdb000-0xbfffffff]
Dec 11 08:30:20 localhost kernel: pci 0000:00:02.0: vgaarb: setting as boot VGA device
Dec 11 08:30:20 localhost kernel: pci 0000:00:02.0: vgaarb: bridge control possible
Dec 11 08:30:20 localhost kernel: pci 0000:00:02.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Dec 11 08:30:20 localhost kernel: vgaarb: loaded
Dec 11 08:30:20 localhost kernel: clocksource: Switched to clocksource kvm-clock
Dec 11 08:30:20 localhost kernel: VFS: Disk quotas dquot_6.6.0
Dec 11 08:30:20 localhost kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Dec 11 08:30:20 localhost kernel: pnp: PnP ACPI init
Dec 11 08:30:20 localhost kernel: pnp 00:03: [dma 2]
Dec 11 08:30:20 localhost kernel: pnp: PnP ACPI: found 5 devices
Dec 11 08:30:20 localhost kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Dec 11 08:30:20 localhost kernel: NET: Registered PF_INET protocol family
Dec 11 08:30:20 localhost kernel: IP idents hash table entries: 131072 (order: 8, 1048576 bytes, linear)
Dec 11 08:30:20 localhost kernel: tcp_listen_portaddr_hash hash table entries: 4096 (order: 4, 65536 bytes, linear)
Dec 11 08:30:20 localhost kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Dec 11 08:30:20 localhost kernel: TCP established hash table entries: 65536 (order: 7, 524288 bytes, linear)
Dec 11 08:30:20 localhost kernel: TCP bind hash table entries: 65536 (order: 8, 1048576 bytes, linear)
Dec 11 08:30:20 localhost kernel: TCP: Hash tables configured (established 65536 bind 65536)
Dec 11 08:30:20 localhost kernel: MPTCP token hash table entries: 8192 (order: 5, 196608 bytes, linear)
Dec 11 08:30:20 localhost kernel: UDP hash table entries: 4096 (order: 5, 131072 bytes, linear)
Dec 11 08:30:20 localhost kernel: UDP-Lite hash table entries: 4096 (order: 5, 131072 bytes, linear)
Dec 11 08:30:20 localhost kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Dec 11 08:30:20 localhost kernel: NET: Registered PF_XDP protocol family
Dec 11 08:30:20 localhost kernel: pci_bus 0000:00: resource 4 [io  0x0000-0x0cf7 window]
Dec 11 08:30:20 localhost kernel: pci_bus 0000:00: resource 5 [io  0x0d00-0xffff window]
Dec 11 08:30:20 localhost kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Dec 11 08:30:20 localhost kernel: pci_bus 0000:00: resource 7 [mem 0xc0000000-0xfebfffff window]
Dec 11 08:30:20 localhost kernel: pci_bus 0000:00: resource 8 [mem 0x240000000-0x2bfffffff window]
Dec 11 08:30:20 localhost kernel: pci 0000:00:01.0: PIIX3: Enabling Passive Release
Dec 11 08:30:20 localhost kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers
Dec 11 08:30:20 localhost kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 11
Dec 11 08:30:20 localhost kernel: pci 0000:00:01.2: quirk_usb_early_handoff+0x0/0x160 took 78979 usecs
Dec 11 08:30:20 localhost kernel: PCI: CLS 0 bytes, default 64
Dec 11 08:30:20 localhost kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB)
Dec 11 08:30:20 localhost kernel: software IO TLB: mapped [mem 0x00000000ab000000-0x00000000af000000] (64MB)
Dec 11 08:30:20 localhost kernel: Trying to unpack rootfs image as initramfs...
Dec 11 08:30:20 localhost kernel: ACPI: bus type thunderbolt registered
Dec 11 08:30:20 localhost kernel: Initialise system trusted keyrings
Dec 11 08:30:20 localhost kernel: Key type blacklist registered
Dec 11 08:30:20 localhost kernel: workingset: timestamp_bits=36 max_order=21 bucket_order=0
Dec 11 08:30:20 localhost kernel: zbud: loaded
Dec 11 08:30:20 localhost kernel: integrity: Platform Keyring initialized
Dec 11 08:30:20 localhost kernel: integrity: Machine keyring initialized
Dec 11 08:30:20 localhost kernel: Freeing initrd memory: 87820K
Dec 11 08:30:20 localhost kernel: NET: Registered PF_ALG protocol family
Dec 11 08:30:20 localhost kernel: xor: automatically using best checksumming function   avx       
Dec 11 08:30:20 localhost kernel: Key type asymmetric registered
Dec 11 08:30:20 localhost kernel: Asymmetric key parser 'x509' registered
Dec 11 08:30:20 localhost kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 246)
Dec 11 08:30:20 localhost kernel: io scheduler mq-deadline registered
Dec 11 08:30:20 localhost kernel: io scheduler kyber registered
Dec 11 08:30:20 localhost kernel: io scheduler bfq registered
Dec 11 08:30:20 localhost kernel: atomic64_test: passed for x86-64 platform with CX8 and with SSE
Dec 11 08:30:20 localhost kernel: shpchp: Standard Hot Plug PCI Controller Driver version: 0.4
Dec 11 08:30:20 localhost kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input0
Dec 11 08:30:20 localhost kernel: ACPI: button: Power Button [PWRF]
Dec 11 08:30:20 localhost kernel: ACPI: \_SB_.LNKB: Enabled at IRQ 10
Dec 11 08:30:20 localhost kernel: ACPI: \_SB_.LNKC: Enabled at IRQ 11
Dec 11 08:30:20 localhost kernel: ACPI: \_SB_.LNKA: Enabled at IRQ 10
Dec 11 08:30:20 localhost kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Dec 11 08:30:20 localhost kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Dec 11 08:30:20 localhost kernel: Non-volatile memory driver v1.3
Dec 11 08:30:20 localhost kernel: rdac: device handler registered
Dec 11 08:30:20 localhost kernel: hp_sw: device handler registered
Dec 11 08:30:20 localhost kernel: emc: device handler registered
Dec 11 08:30:20 localhost kernel: alua: device handler registered
Dec 11 08:30:20 localhost kernel: uhci_hcd 0000:00:01.2: UHCI Host Controller
Dec 11 08:30:20 localhost kernel: uhci_hcd 0000:00:01.2: new USB bus registered, assigned bus number 1
Dec 11 08:30:20 localhost kernel: uhci_hcd 0000:00:01.2: detected 2 ports
Dec 11 08:30:20 localhost kernel: uhci_hcd 0000:00:01.2: irq 11, io port 0x0000c100
Dec 11 08:30:20 localhost kernel: usb usb1: New USB device found, idVendor=1d6b, idProduct=0001, bcdDevice= 5.14
Dec 11 08:30:20 localhost kernel: usb usb1: New USB device strings: Mfr=3, Product=2, SerialNumber=1
Dec 11 08:30:20 localhost kernel: usb usb1: Product: UHCI Host Controller
Dec 11 08:30:20 localhost kernel: usb usb1: Manufacturer: Linux 5.14.0-648.el9.x86_64 uhci_hcd
Dec 11 08:30:20 localhost kernel: usb usb1: SerialNumber: 0000:00:01.2
Dec 11 08:30:20 localhost kernel: hub 1-0:1.0: USB hub found
Dec 11 08:30:20 localhost kernel: hub 1-0:1.0: 2 ports detected
Dec 11 08:30:20 localhost kernel: usbcore: registered new interface driver usbserial_generic
Dec 11 08:30:20 localhost kernel: usbserial: USB Serial support registered for generic
Dec 11 08:30:20 localhost kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Dec 11 08:30:20 localhost kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Dec 11 08:30:20 localhost kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Dec 11 08:30:20 localhost kernel: mousedev: PS/2 mouse device common for all mice
Dec 11 08:30:20 localhost kernel: rtc_cmos 00:04: RTC can wake from S4
Dec 11 08:30:20 localhost kernel: rtc_cmos 00:04: registered as rtc0
Dec 11 08:30:20 localhost kernel: rtc_cmos 00:04: setting system clock to 2025-12-11T08:30:19 UTC (1765441819)
Dec 11 08:30:20 localhost kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram
Dec 11 08:30:20 localhost kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input1
Dec 11 08:30:20 localhost kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled
Dec 11 08:30:20 localhost kernel: input: VirtualPS/2 VMware VMMouse as /devices/platform/i8042/serio1/input/input4
Dec 11 08:30:20 localhost kernel: input: VirtualPS/2 VMware VMMouse as /devices/platform/i8042/serio1/input/input3
Dec 11 08:30:20 localhost kernel: hid: raw HID events driver (C) Jiri Kosina
Dec 11 08:30:20 localhost kernel: usbcore: registered new interface driver usbhid
Dec 11 08:30:20 localhost kernel: usbhid: USB HID core driver
Dec 11 08:30:20 localhost kernel: drop_monitor: Initializing network drop monitor service
Dec 11 08:30:20 localhost kernel: Initializing XFRM netlink socket
Dec 11 08:30:20 localhost kernel: NET: Registered PF_INET6 protocol family
Dec 11 08:30:20 localhost kernel: Segment Routing with IPv6
Dec 11 08:30:20 localhost kernel: NET: Registered PF_PACKET protocol family
Dec 11 08:30:20 localhost kernel: mpls_gso: MPLS GSO support
Dec 11 08:30:20 localhost kernel: IPI shorthand broadcast: enabled
Dec 11 08:30:20 localhost kernel: AVX2 version of gcm_enc/dec engaged.
Dec 11 08:30:20 localhost kernel: AES CTR mode by8 optimization enabled
Dec 11 08:30:20 localhost kernel: sched_clock: Marking stable (3437002855, 148608277)->(3958410706, -372799574)
Dec 11 08:30:20 localhost kernel: registered taskstats version 1
Dec 11 08:30:20 localhost kernel: Loading compiled-in X.509 certificates
Dec 11 08:30:20 localhost kernel: Loaded X.509 cert 'The CentOS Project: CentOS Stream kernel signing key: bcc7fcdcfd9be61e8634554e9f7a1c01f32489d8'
Dec 11 08:30:20 localhost kernel: Loaded X.509 cert 'Red Hat Enterprise Linux Driver Update Program (key 3): bf57f3e87362bc7229d9f465321773dfd1f77a80'
Dec 11 08:30:20 localhost kernel: Loaded X.509 cert 'Red Hat Enterprise Linux kpatch signing key: 4d38fd864ebe18c5f0b72e3852e2014c3a676fc8'
Dec 11 08:30:20 localhost kernel: Loaded X.509 cert 'RH-IMA-CA: Red Hat IMA CA: fb31825dd0e073685b264e3038963673f753959a'
Dec 11 08:30:20 localhost kernel: Loaded X.509 cert 'Nvidia GPU OOT signing 001: 55e1cef88193e60419f0b0ec379c49f77545acf0'
Dec 11 08:30:20 localhost kernel: Demotion targets for Node 0: null
Dec 11 08:30:20 localhost kernel: page_owner is disabled
Dec 11 08:30:20 localhost kernel: Key type .fscrypt registered
Dec 11 08:30:20 localhost kernel: Key type fscrypt-provisioning registered
Dec 11 08:30:20 localhost kernel: Key type big_key registered
Dec 11 08:30:20 localhost kernel: Key type encrypted registered
Dec 11 08:30:20 localhost kernel: ima: No TPM chip found, activating TPM-bypass!
Dec 11 08:30:20 localhost kernel: Loading compiled-in module X.509 certificates
Dec 11 08:30:20 localhost kernel: Loaded X.509 cert 'The CentOS Project: CentOS Stream kernel signing key: bcc7fcdcfd9be61e8634554e9f7a1c01f32489d8'
Dec 11 08:30:20 localhost kernel: ima: Allocated hash algorithm: sha256
Dec 11 08:30:20 localhost kernel: ima: No architecture policies found
Dec 11 08:30:20 localhost kernel: evm: Initialising EVM extended attributes:
Dec 11 08:30:20 localhost kernel: evm: security.selinux
Dec 11 08:30:20 localhost kernel: evm: security.SMACK64 (disabled)
Dec 11 08:30:20 localhost kernel: evm: security.SMACK64EXEC (disabled)
Dec 11 08:30:20 localhost kernel: evm: security.SMACK64TRANSMUTE (disabled)
Dec 11 08:30:20 localhost kernel: evm: security.SMACK64MMAP (disabled)
Dec 11 08:30:20 localhost kernel: evm: security.apparmor (disabled)
Dec 11 08:30:20 localhost kernel: evm: security.ima
Dec 11 08:30:20 localhost kernel: evm: security.capability
Dec 11 08:30:20 localhost kernel: evm: HMAC attrs: 0x1
Dec 11 08:30:20 localhost kernel: usb 1-1: new full-speed USB device number 2 using uhci_hcd
Dec 11 08:30:20 localhost kernel: usb 1-1: New USB device found, idVendor=0627, idProduct=0001, bcdDevice= 0.00
Dec 11 08:30:20 localhost kernel: usb 1-1: New USB device strings: Mfr=1, Product=3, SerialNumber=10
Dec 11 08:30:20 localhost kernel: usb 1-1: Product: QEMU USB Tablet
Dec 11 08:30:20 localhost kernel: usb 1-1: Manufacturer: QEMU
Dec 11 08:30:20 localhost kernel: usb 1-1: SerialNumber: 28754-0000:00:01.2-1
Dec 11 08:30:20 localhost kernel: input: QEMU QEMU USB Tablet as /devices/pci0000:00/0000:00:01.2/usb1/1-1/1-1:1.0/0003:0627:0001.0001/input/input5
Dec 11 08:30:20 localhost kernel: hid-generic 0003:0627:0001.0001: input,hidraw0: USB HID v0.01 Mouse [QEMU QEMU USB Tablet] on usb-0000:00:01.2-1/input0
Dec 11 08:30:20 localhost kernel: Running certificate verification RSA selftest
Dec 11 08:30:20 localhost kernel: Loaded X.509 cert 'Certificate verification self-testing key: f58703bb33ce1b73ee02eccdee5b8817518fe3db'
Dec 11 08:30:20 localhost kernel: Running certificate verification ECDSA selftest
Dec 11 08:30:20 localhost kernel: Loaded X.509 cert 'Certificate verification ECDSA self-testing key: 2900bcea1deb7bc8479a84a23d758efdfdd2b2d3'
Dec 11 08:30:20 localhost kernel: clk: Disabling unused clocks
Dec 11 08:30:20 localhost kernel: Freeing unused decrypted memory: 2028K
Dec 11 08:30:20 localhost kernel: Freeing unused kernel image (initmem) memory: 4192K
Dec 11 08:30:20 localhost kernel: Write protecting the kernel read-only data: 30720k
Dec 11 08:30:20 localhost kernel: Freeing unused kernel image (rodata/data gap) memory: 420K
Dec 11 08:30:20 localhost kernel: x86/mm: Checked W+X mappings: passed, no W+X pages found.
Dec 11 08:30:20 localhost kernel: Run /init as init process
Dec 11 08:30:20 localhost kernel:   with arguments:
Dec 11 08:30:20 localhost kernel:     /init
Dec 11 08:30:20 localhost kernel:   with environment:
Dec 11 08:30:20 localhost kernel:     HOME=/
Dec 11 08:30:20 localhost kernel:     TERM=linux
Dec 11 08:30:20 localhost kernel:     BOOT_IMAGE=(hd0,msdos1)/boot/vmlinuz-5.14.0-648.el9.x86_64
Dec 11 08:30:20 localhost systemd[1]: systemd 252-59.el9 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT +GNUTLS +OPENSSL +ACL +BLKID +CURL +ELFUTILS +FIDO2 +IDN2 -IDN -IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY +P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK +XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
Dec 11 08:30:20 localhost systemd[1]: Detected virtualization kvm.
Dec 11 08:30:20 localhost systemd[1]: Detected architecture x86-64.
Dec 11 08:30:20 localhost systemd[1]: Running in initrd.
Dec 11 08:30:20 localhost systemd[1]: No hostname configured, using default hostname.
Dec 11 08:30:20 localhost systemd[1]: Hostname set to <localhost>.
Dec 11 08:30:20 localhost systemd[1]: Initializing machine ID from VM UUID.
Dec 11 08:30:20 localhost systemd[1]: Queued start job for default target Initrd Default Target.
Dec 11 08:30:20 localhost systemd[1]: Started Dispatch Password Requests to Console Directory Watch.
Dec 11 08:30:20 localhost systemd[1]: Reached target Local Encrypted Volumes.
Dec 11 08:30:20 localhost systemd[1]: Reached target Initrd /usr File System.
Dec 11 08:30:20 localhost systemd[1]: Reached target Local File Systems.
Dec 11 08:30:20 localhost systemd[1]: Reached target Path Units.
Dec 11 08:30:20 localhost systemd[1]: Reached target Slice Units.
Dec 11 08:30:20 localhost systemd[1]: Reached target Swaps.
Dec 11 08:30:20 localhost systemd[1]: Reached target Timer Units.
Dec 11 08:30:20 localhost systemd[1]: Listening on D-Bus System Message Bus Socket.
Dec 11 08:30:20 localhost systemd[1]: Listening on Journal Socket (/dev/log).
Dec 11 08:30:20 localhost systemd[1]: Listening on Journal Socket.
Dec 11 08:30:20 localhost systemd[1]: Listening on udev Control Socket.
Dec 11 08:30:20 localhost systemd[1]: Listening on udev Kernel Socket.
Dec 11 08:30:20 localhost systemd[1]: Reached target Socket Units.
Dec 11 08:30:20 localhost systemd[1]: Starting Create List of Static Device Nodes...
Dec 11 08:30:20 localhost systemd[1]: Starting Journal Service...
Dec 11 08:30:20 localhost systemd[1]: Load Kernel Modules was skipped because no trigger condition checks were met.
Dec 11 08:30:20 localhost systemd[1]: Starting Apply Kernel Variables...
Dec 11 08:30:20 localhost systemd[1]: Starting Create System Users...
Dec 11 08:30:20 localhost systemd[1]: Starting Setup Virtual Console...
Dec 11 08:30:20 localhost systemd[1]: Finished Create List of Static Device Nodes.
Dec 11 08:30:20 localhost systemd[1]: Finished Apply Kernel Variables.
Dec 11 08:30:20 localhost systemd[1]: Finished Create System Users.
Dec 11 08:30:20 localhost systemd[1]: Starting Create Static Device Nodes in /dev...
Dec 11 08:30:20 localhost systemd-journald[313]: Journal started
Dec 11 08:30:20 localhost systemd-journald[313]: Runtime Journal (/run/log/journal/8d06aa111f3848c28f1e42d6d242dd05) is 8.0M, max 153.6M, 145.6M free.
Dec 11 08:30:20 localhost systemd-sysusers[317]: Creating group 'users' with GID 100.
Dec 11 08:30:20 localhost systemd-sysusers[317]: Creating group 'dbus' with GID 81.
Dec 11 08:30:20 localhost systemd-sysusers[317]: Creating user 'dbus' (System Message Bus) with UID 81 and GID 81.
Dec 11 08:30:20 localhost systemd[1]: Started Journal Service.
Dec 11 08:30:20 localhost systemd[1]: Starting Create Volatile Files and Directories...
Dec 11 08:30:20 localhost systemd[1]: Finished Create Static Device Nodes in /dev.
Dec 11 08:30:20 localhost systemd[1]: Finished Create Volatile Files and Directories.
Dec 11 08:30:20 localhost systemd[1]: Finished Setup Virtual Console.
Dec 11 08:30:20 localhost systemd[1]: dracut ask for additional cmdline parameters was skipped because no trigger condition checks were met.
Dec 11 08:30:20 localhost systemd[1]: Starting dracut cmdline hook...
Dec 11 08:30:20 localhost dracut-cmdline[333]: dracut-9 dracut-057-102.git20250818.el9
Dec 11 08:30:20 localhost dracut-cmdline[333]: Using kernel command line parameters:    BOOT_IMAGE=(hd0,msdos1)/boot/vmlinuz-5.14.0-648.el9.x86_64 root=UUID=cbdedf45-ed1d-4952-82a8-33a12c0ba266 ro console=ttyS0,115200n8 no_timer_check net.ifnames=0 crashkernel=1G-2G:192M,2G-64G:256M,64G-:512M
Dec 11 08:30:20 localhost systemd[1]: Finished dracut cmdline hook.
Dec 11 08:30:20 localhost systemd[1]: Starting dracut pre-udev hook...
Dec 11 08:30:20 localhost kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Dec 11 08:30:20 localhost kernel: device-mapper: uevent: version 1.0.3
Dec 11 08:30:20 localhost kernel: device-mapper: ioctl: 4.50.0-ioctl (2025-04-28) initialised: dm-devel@lists.linux.dev
Dec 11 08:30:21 localhost kernel: RPC: Registered named UNIX socket transport module.
Dec 11 08:30:21 localhost kernel: RPC: Registered udp transport module.
Dec 11 08:30:21 localhost kernel: RPC: Registered tcp transport module.
Dec 11 08:30:21 localhost kernel: RPC: Registered tcp-with-tls transport module.
Dec 11 08:30:21 localhost kernel: RPC: Registered tcp NFSv4.1 backchannel transport module.
Dec 11 08:30:21 localhost rpc.statd[451]: Version 2.5.4 starting
Dec 11 08:30:21 localhost rpc.statd[451]: Initializing NSM state
Dec 11 08:30:21 localhost rpc.idmapd[456]: Setting log level to 0
Dec 11 08:30:21 localhost systemd[1]: Finished dracut pre-udev hook.
Dec 11 08:30:21 localhost systemd[1]: Starting Rule-based Manager for Device Events and Files...
Dec 11 08:30:21 localhost systemd-udevd[469]: Using default interface naming scheme 'rhel-9.0'.
Dec 11 08:30:21 localhost systemd[1]: Started Rule-based Manager for Device Events and Files.
Dec 11 08:30:21 localhost systemd[1]: Starting dracut pre-trigger hook...
Dec 11 08:30:21 localhost systemd[1]: Finished dracut pre-trigger hook.
Dec 11 08:30:21 localhost systemd[1]: Starting Coldplug All udev Devices...
Dec 11 08:30:21 localhost systemd[1]: Created slice Slice /system/modprobe.
Dec 11 08:30:21 localhost systemd[1]: Starting Load Kernel Module configfs...
Dec 11 08:30:21 localhost systemd[1]: Finished Coldplug All udev Devices.
Dec 11 08:30:21 localhost systemd[1]: nm-initrd.service was skipped because of an unmet condition check (ConditionPathExists=/run/NetworkManager/initrd/neednet).
Dec 11 08:30:21 localhost systemd[1]: Reached target Network.
Dec 11 08:30:21 localhost systemd[1]: nm-wait-online-initrd.service was skipped because of an unmet condition check (ConditionPathExists=/run/NetworkManager/initrd/neednet).
Dec 11 08:30:21 localhost systemd[1]: Starting dracut initqueue hook...
Dec 11 08:30:21 localhost systemd[1]: modprobe@configfs.service: Deactivated successfully.
Dec 11 08:30:21 localhost systemd[1]: Finished Load Kernel Module configfs.
Dec 11 08:30:21 localhost systemd[1]: Mounting Kernel Configuration File System...
Dec 11 08:30:21 localhost systemd[1]: Mounted Kernel Configuration File System.
Dec 11 08:30:21 localhost systemd[1]: Reached target System Initialization.
Dec 11 08:30:21 localhost systemd[1]: Reached target Basic System.
Dec 11 08:30:21 localhost kernel: virtio_blk virtio2: 8/0/0 default/read/poll queues
Dec 11 08:30:21 localhost kernel: libata version 3.00 loaded.
Dec 11 08:30:21 localhost kernel: ata_piix 0000:00:01.1: version 2.13
Dec 11 08:30:21 localhost kernel: scsi host0: ata_piix
Dec 11 08:30:21 localhost kernel: scsi host1: ata_piix
Dec 11 08:30:21 localhost kernel: ata1: PATA max MWDMA2 cmd 0x1f0 ctl 0x3f6 bmdma 0xc140 irq 14 lpm-pol 0
Dec 11 08:30:21 localhost kernel: ata2: PATA max MWDMA2 cmd 0x170 ctl 0x376 bmdma 0xc148 irq 15 lpm-pol 0
Dec 11 08:30:21 localhost kernel: virtio_blk virtio2: [vda] 167772160 512-byte logical blocks (85.9 GB/80.0 GiB)
Dec 11 08:30:21 localhost kernel:  vda: vda1
Dec 11 08:30:21 localhost kernel: ata1: found unknown device (class 0)
Dec 11 08:30:21 localhost kernel: ata1.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100
Dec 11 08:30:21 localhost kernel: scsi 0:0:0:0: CD-ROM            QEMU     QEMU DVD-ROM     2.5+ PQ: 0 ANSI: 5
Dec 11 08:30:21 localhost systemd-udevd[482]: Network interface NamePolicy= disabled on kernel command line.
Dec 11 08:30:21 localhost kernel: scsi 0:0:0:0: Attached scsi generic sg0 type 5
Dec 11 08:30:21 localhost systemd[1]: Found device /dev/disk/by-uuid/cbdedf45-ed1d-4952-82a8-33a12c0ba266.
Dec 11 08:30:21 localhost systemd[1]: Reached target Initrd Root Device.
Dec 11 08:30:21 localhost kernel: sr 0:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray
Dec 11 08:30:21 localhost kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Dec 11 08:30:21 localhost kernel: sr 0:0:0:0: Attached scsi CD-ROM sr0
Dec 11 08:30:21 localhost systemd[1]: Finished dracut initqueue hook.
Dec 11 08:30:21 localhost systemd[1]: Reached target Preparation for Remote File Systems.
Dec 11 08:30:21 localhost systemd[1]: Reached target Remote Encrypted Volumes.
Dec 11 08:30:21 localhost systemd[1]: Reached target Remote File Systems.
Dec 11 08:30:21 localhost systemd[1]: Starting dracut pre-mount hook...
Dec 11 08:30:21 localhost systemd[1]: Finished dracut pre-mount hook.
Dec 11 08:30:21 localhost systemd[1]: Starting File System Check on /dev/disk/by-uuid/cbdedf45-ed1d-4952-82a8-33a12c0ba266...
Dec 11 08:30:21 localhost systemd-fsck[563]: /usr/sbin/fsck.xfs: XFS file system.
Dec 11 08:30:21 localhost systemd[1]: Finished File System Check on /dev/disk/by-uuid/cbdedf45-ed1d-4952-82a8-33a12c0ba266.
Dec 11 08:30:21 localhost systemd[1]: Mounting /sysroot...
Dec 11 08:30:22 localhost kernel: SGI XFS with ACLs, security attributes, scrub, quota, no debug enabled
Dec 11 08:30:22 localhost kernel: XFS (vda1): Mounting V5 Filesystem cbdedf45-ed1d-4952-82a8-33a12c0ba266
Dec 11 08:30:22 localhost kernel: XFS (vda1): Ending clean mount
Dec 11 08:30:22 localhost systemd[1]: Mounted /sysroot.
Dec 11 08:30:22 localhost systemd[1]: Reached target Initrd Root File System.
Dec 11 08:30:22 localhost systemd[1]: Starting Mountpoints Configured in the Real Root...
Dec 11 08:30:22 localhost systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Dec 11 08:30:22 localhost systemd[1]: Finished Mountpoints Configured in the Real Root.
Dec 11 08:30:22 localhost systemd[1]: Reached target Initrd File Systems.
Dec 11 08:30:22 localhost systemd[1]: Reached target Initrd Default Target.
Dec 11 08:30:22 localhost systemd[1]: Starting dracut mount hook...
Dec 11 08:30:22 localhost systemd[1]: Finished dracut mount hook.
Dec 11 08:30:22 localhost systemd[1]: Starting dracut pre-pivot and cleanup hook...
Dec 11 08:30:22 localhost rpc.idmapd[456]: exiting on signal 15
Dec 11 08:30:22 localhost systemd[1]: var-lib-nfs-rpc_pipefs.mount: Deactivated successfully.
Dec 11 08:30:22 localhost systemd[1]: Finished dracut pre-pivot and cleanup hook.
Dec 11 08:30:22 localhost systemd[1]: Starting Cleaning Up and Shutting Down Daemons...
Dec 11 08:30:22 localhost systemd[1]: Stopped target Network.
Dec 11 08:30:22 localhost systemd[1]: Stopped target Remote Encrypted Volumes.
Dec 11 08:30:22 localhost systemd[1]: Stopped target Timer Units.
Dec 11 08:30:22 localhost systemd[1]: dbus.socket: Deactivated successfully.
Dec 11 08:30:22 localhost systemd[1]: Closed D-Bus System Message Bus Socket.
Dec 11 08:30:22 localhost systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Dec 11 08:30:22 localhost systemd[1]: Stopped dracut pre-pivot and cleanup hook.
Dec 11 08:30:22 localhost systemd[1]: Stopped target Initrd Default Target.
Dec 11 08:30:22 localhost systemd[1]: Stopped target Basic System.
Dec 11 08:30:22 localhost systemd[1]: Stopped target Initrd Root Device.
Dec 11 08:30:22 localhost systemd[1]: Stopped target Initrd /usr File System.
Dec 11 08:30:22 localhost systemd[1]: Stopped target Path Units.
Dec 11 08:30:22 localhost systemd[1]: Stopped target Remote File Systems.
Dec 11 08:30:22 localhost systemd[1]: Stopped target Preparation for Remote File Systems.
Dec 11 08:30:22 localhost systemd[1]: Stopped target Slice Units.
Dec 11 08:30:22 localhost systemd[1]: Stopped target Socket Units.
Dec 11 08:30:22 localhost systemd[1]: Stopped target System Initialization.
Dec 11 08:30:22 localhost systemd[1]: Stopped target Local File Systems.
Dec 11 08:30:22 localhost systemd[1]: Stopped target Swaps.
Dec 11 08:30:22 localhost systemd[1]: dracut-mount.service: Deactivated successfully.
Dec 11 08:30:22 localhost systemd[1]: Stopped dracut mount hook.
Dec 11 08:30:22 localhost systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Dec 11 08:30:22 localhost systemd[1]: Stopped dracut pre-mount hook.
Dec 11 08:30:22 localhost systemd[1]: Stopped target Local Encrypted Volumes.
Dec 11 08:30:22 localhost systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Dec 11 08:30:22 localhost systemd[1]: Stopped Dispatch Password Requests to Console Directory Watch.
Dec 11 08:30:22 localhost systemd[1]: dracut-initqueue.service: Deactivated successfully.
Dec 11 08:30:22 localhost systemd[1]: Stopped dracut initqueue hook.
Dec 11 08:30:22 localhost systemd[1]: systemd-sysctl.service: Deactivated successfully.
Dec 11 08:30:22 localhost systemd[1]: Stopped Apply Kernel Variables.
Dec 11 08:30:22 localhost systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Dec 11 08:30:22 localhost systemd[1]: Stopped Create Volatile Files and Directories.
Dec 11 08:30:22 localhost systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Dec 11 08:30:22 localhost systemd[1]: Stopped Coldplug All udev Devices.
Dec 11 08:30:22 localhost systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Dec 11 08:30:22 localhost systemd[1]: Stopped dracut pre-trigger hook.
Dec 11 08:30:22 localhost systemd[1]: Stopping Rule-based Manager for Device Events and Files...
Dec 11 08:30:22 localhost systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Dec 11 08:30:22 localhost systemd[1]: Stopped Setup Virtual Console.
Dec 11 08:30:22 localhost systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully.
Dec 11 08:30:22 localhost systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Dec 11 08:30:22 localhost systemd[1]: systemd-udevd.service: Deactivated successfully.
Dec 11 08:30:22 localhost systemd[1]: Stopped Rule-based Manager for Device Events and Files.
Dec 11 08:30:22 localhost systemd[1]: systemd-udevd.service: Consumed 1.402s CPU time.
Dec 11 08:30:22 localhost systemd[1]: initrd-cleanup.service: Deactivated successfully.
Dec 11 08:30:22 localhost systemd[1]: Finished Cleaning Up and Shutting Down Daemons.
Dec 11 08:30:22 localhost systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Dec 11 08:30:22 localhost systemd[1]: Closed udev Control Socket.
Dec 11 08:30:22 localhost systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Dec 11 08:30:22 localhost systemd[1]: Closed udev Kernel Socket.
Dec 11 08:30:22 localhost systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Dec 11 08:30:22 localhost systemd[1]: Stopped dracut pre-udev hook.
Dec 11 08:30:22 localhost systemd[1]: dracut-cmdline.service: Deactivated successfully.
Dec 11 08:30:22 localhost systemd[1]: Stopped dracut cmdline hook.
Dec 11 08:30:22 localhost systemd[1]: Starting Cleanup udev Database...
Dec 11 08:30:22 localhost systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Dec 11 08:30:22 localhost systemd[1]: Stopped Create Static Device Nodes in /dev.
Dec 11 08:30:22 localhost systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Dec 11 08:30:22 localhost systemd[1]: Stopped Create List of Static Device Nodes.
Dec 11 08:30:22 localhost systemd[1]: systemd-sysusers.service: Deactivated successfully.
Dec 11 08:30:22 localhost systemd[1]: Stopped Create System Users.
Dec 11 08:30:22 localhost systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully.
Dec 11 08:30:22 localhost systemd[1]: run-credentials-systemd\x2dsysusers.service.mount: Deactivated successfully.
Dec 11 08:30:22 localhost systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Dec 11 08:30:22 localhost systemd[1]: Finished Cleanup udev Database.
Dec 11 08:30:22 localhost systemd[1]: Reached target Switch Root.
Dec 11 08:30:22 localhost systemd[1]: Starting Switch Root...
Dec 11 08:30:22 localhost systemd[1]: Switching root.
Dec 11 08:30:22 localhost systemd-journald[313]: Journal stopped
Dec 11 08:30:24 localhost systemd-journald[313]: Received SIGTERM from PID 1 (systemd).
Dec 11 08:30:24 localhost kernel: audit: type=1404 audit(1765441823.056:2): enforcing=1 old_enforcing=0 auid=4294967295 ses=4294967295 enabled=1 old-enabled=1 lsm=selinux res=1
Dec 11 08:30:24 localhost kernel: SELinux:  policy capability network_peer_controls=1
Dec 11 08:30:24 localhost kernel: SELinux:  policy capability open_perms=1
Dec 11 08:30:24 localhost kernel: SELinux:  policy capability extended_socket_class=1
Dec 11 08:30:24 localhost kernel: SELinux:  policy capability always_check_network=0
Dec 11 08:30:24 localhost kernel: SELinux:  policy capability cgroup_seclabel=1
Dec 11 08:30:24 localhost kernel: SELinux:  policy capability nnp_nosuid_transition=1
Dec 11 08:30:24 localhost kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Dec 11 08:30:24 localhost kernel: audit: type=1403 audit(1765441823.370:3): auid=4294967295 ses=4294967295 lsm=selinux res=1
Dec 11 08:30:24 localhost systemd[1]: Successfully loaded SELinux policy in 317.328ms.
Dec 11 08:30:24 localhost systemd[1]: Relabelled /dev, /dev/shm, /run, /sys/fs/cgroup in 34.827ms.
Dec 11 08:30:24 localhost systemd[1]: systemd 252-59.el9 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT +GNUTLS +OPENSSL +ACL +BLKID +CURL +ELFUTILS +FIDO2 +IDN2 -IDN -IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY +P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK +XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
Dec 11 08:30:24 localhost systemd[1]: Detected virtualization kvm.
Dec 11 08:30:24 localhost systemd[1]: Detected architecture x86-64.
Dec 11 08:30:24 localhost systemd-rc-local-generator[644]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 11 08:30:24 localhost systemd[1]: initrd-switch-root.service: Deactivated successfully.
Dec 11 08:30:24 localhost systemd[1]: Stopped Switch Root.
Dec 11 08:30:24 localhost systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Dec 11 08:30:24 localhost systemd[1]: Created slice Slice /system/getty.
Dec 11 08:30:24 localhost systemd[1]: Created slice Slice /system/serial-getty.
Dec 11 08:30:24 localhost systemd[1]: Created slice Slice /system/sshd-keygen.
Dec 11 08:30:24 localhost systemd[1]: Created slice User and Session Slice.
Dec 11 08:30:24 localhost systemd[1]: Started Dispatch Password Requests to Console Directory Watch.
Dec 11 08:30:24 localhost systemd[1]: Started Forward Password Requests to Wall Directory Watch.
Dec 11 08:30:24 localhost systemd[1]: Set up automount Arbitrary Executable File Formats File System Automount Point.
Dec 11 08:30:24 localhost systemd[1]: Reached target Local Encrypted Volumes.
Dec 11 08:30:24 localhost systemd[1]: Stopped target Switch Root.
Dec 11 08:30:24 localhost systemd[1]: Stopped target Initrd File Systems.
Dec 11 08:30:24 localhost systemd[1]: Stopped target Initrd Root File System.
Dec 11 08:30:24 localhost systemd[1]: Reached target Local Integrity Protected Volumes.
Dec 11 08:30:24 localhost systemd[1]: Reached target Path Units.
Dec 11 08:30:24 localhost systemd[1]: Reached target rpc_pipefs.target.
Dec 11 08:30:24 localhost systemd[1]: Reached target Slice Units.
Dec 11 08:30:24 localhost systemd[1]: Reached target Swaps.
Dec 11 08:30:24 localhost systemd[1]: Reached target Local Verity Protected Volumes.
Dec 11 08:30:24 localhost systemd[1]: Listening on RPCbind Server Activation Socket.
Dec 11 08:30:24 localhost systemd[1]: Reached target RPC Port Mapper.
Dec 11 08:30:24 localhost systemd[1]: Listening on Process Core Dump Socket.
Dec 11 08:30:24 localhost systemd[1]: Listening on initctl Compatibility Named Pipe.
Dec 11 08:30:24 localhost systemd[1]: Listening on udev Control Socket.
Dec 11 08:30:24 localhost systemd[1]: Listening on udev Kernel Socket.
Dec 11 08:30:24 localhost systemd[1]: Mounting Huge Pages File System...
Dec 11 08:30:24 localhost systemd[1]: Mounting POSIX Message Queue File System...
Dec 11 08:30:24 localhost systemd[1]: Mounting Kernel Debug File System...
Dec 11 08:30:24 localhost systemd[1]: Mounting Kernel Trace File System...
Dec 11 08:30:24 localhost systemd[1]: Kernel Module supporting RPCSEC_GSS was skipped because of an unmet condition check (ConditionPathExists=/etc/krb5.keytab).
Dec 11 08:30:24 localhost systemd[1]: Starting Create List of Static Device Nodes...
Dec 11 08:30:24 localhost systemd[1]: Starting Load Kernel Module configfs...
Dec 11 08:30:24 localhost systemd[1]: Starting Load Kernel Module drm...
Dec 11 08:30:24 localhost systemd[1]: Starting Load Kernel Module efi_pstore...
Dec 11 08:30:24 localhost systemd[1]: Starting Load Kernel Module fuse...
Dec 11 08:30:24 localhost systemd[1]: Starting Read and set NIS domainname from /etc/sysconfig/network...
Dec 11 08:30:24 localhost systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Dec 11 08:30:24 localhost systemd[1]: Stopped File System Check on Root Device.
Dec 11 08:30:24 localhost systemd[1]: Stopped Journal Service.
Dec 11 08:30:24 localhost systemd[1]: Starting Journal Service...
Dec 11 08:30:24 localhost systemd[1]: Load Kernel Modules was skipped because no trigger condition checks were met.
Dec 11 08:30:24 localhost systemd[1]: Starting Generate network units from Kernel command line...
Dec 11 08:30:24 localhost systemd[1]: TPM2 PCR Machine ID Measurement was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Dec 11 08:30:24 localhost systemd[1]: Starting Remount Root and Kernel File Systems...
Dec 11 08:30:24 localhost systemd[1]: Repartition Root Disk was skipped because no trigger condition checks were met.
Dec 11 08:30:24 localhost systemd[1]: Starting Apply Kernel Variables...
Dec 11 08:30:24 localhost kernel: fuse: init (API version 7.37)
Dec 11 08:30:24 localhost systemd[1]: Starting Coldplug All udev Devices...
Dec 11 08:30:24 localhost kernel: xfs filesystem being remounted at / supports timestamps until 2038 (0x7fffffff)
Dec 11 08:30:24 localhost systemd-journald[685]: Journal started
Dec 11 08:30:24 localhost systemd-journald[685]: Runtime Journal (/run/log/journal/64f1d6692049d8be5e8b216cc203502c) is 8.0M, max 153.6M, 145.6M free.
Dec 11 08:30:23 localhost systemd[1]: Queued start job for default target Multi-User System.
Dec 11 08:30:23 localhost systemd[1]: systemd-journald.service: Deactivated successfully.
Dec 11 08:30:24 localhost systemd[1]: Started Journal Service.
Dec 11 08:30:24 localhost systemd[1]: Mounted Huge Pages File System.
Dec 11 08:30:24 localhost systemd[1]: Mounted POSIX Message Queue File System.
Dec 11 08:30:24 localhost systemd[1]: Mounted Kernel Debug File System.
Dec 11 08:30:24 localhost systemd[1]: Mounted Kernel Trace File System.
Dec 11 08:30:24 localhost systemd[1]: Finished Create List of Static Device Nodes.
Dec 11 08:30:24 localhost systemd[1]: modprobe@configfs.service: Deactivated successfully.
Dec 11 08:30:24 localhost systemd[1]: Finished Load Kernel Module configfs.
Dec 11 08:30:24 localhost systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Dec 11 08:30:24 localhost systemd[1]: Finished Load Kernel Module efi_pstore.
Dec 11 08:30:24 localhost systemd[1]: modprobe@fuse.service: Deactivated successfully.
Dec 11 08:30:24 localhost systemd[1]: Finished Load Kernel Module fuse.
Dec 11 08:30:24 localhost systemd[1]: Finished Read and set NIS domainname from /etc/sysconfig/network.
Dec 11 08:30:24 localhost systemd[1]: Finished Generate network units from Kernel command line.
Dec 11 08:30:24 localhost systemd[1]: Finished Remount Root and Kernel File Systems.
Dec 11 08:30:24 localhost systemd[1]: Finished Apply Kernel Variables.
Dec 11 08:30:24 localhost kernel: ACPI: bus type drm_connector registered
Dec 11 08:30:24 localhost systemd[1]: Mounting FUSE Control File System...
Dec 11 08:30:24 localhost systemd[1]: First Boot Wizard was skipped because of an unmet condition check (ConditionFirstBoot=yes).
Dec 11 08:30:24 localhost systemd[1]: Starting Rebuild Hardware Database...
Dec 11 08:30:24 localhost systemd[1]: Starting Flush Journal to Persistent Storage...
Dec 11 08:30:24 localhost systemd[1]: Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Dec 11 08:30:24 localhost systemd[1]: Starting Load/Save OS Random Seed...
Dec 11 08:30:24 localhost systemd[1]: Starting Create System Users...
Dec 11 08:30:24 localhost systemd-journald[685]: Runtime Journal (/run/log/journal/64f1d6692049d8be5e8b216cc203502c) is 8.0M, max 153.6M, 145.6M free.
Dec 11 08:30:24 localhost systemd-journald[685]: Received client request to flush runtime journal.
Dec 11 08:30:24 localhost systemd[1]: modprobe@drm.service: Deactivated successfully.
Dec 11 08:30:24 localhost systemd[1]: Finished Load Kernel Module drm.
Dec 11 08:30:24 localhost systemd[1]: Mounted FUSE Control File System.
Dec 11 08:30:24 localhost systemd[1]: Finished Flush Journal to Persistent Storage.
Dec 11 08:30:24 localhost systemd[1]: Finished Load/Save OS Random Seed.
Dec 11 08:30:24 localhost systemd[1]: First Boot Complete was skipped because of an unmet condition check (ConditionFirstBoot=yes).
Dec 11 08:30:24 localhost systemd[1]: Finished Create System Users.
Dec 11 08:30:24 localhost systemd[1]: Starting Create Static Device Nodes in /dev...
Dec 11 08:30:24 localhost systemd[1]: Finished Coldplug All udev Devices.
Dec 11 08:30:24 localhost systemd[1]: Finished Create Static Device Nodes in /dev.
Dec 11 08:30:24 localhost systemd[1]: Reached target Preparation for Local File Systems.
Dec 11 08:30:24 localhost systemd[1]: Reached target Local File Systems.
Dec 11 08:30:24 localhost systemd[1]: Starting Rebuild Dynamic Linker Cache...
Dec 11 08:30:24 localhost systemd[1]: Mark the need to relabel after reboot was skipped because of an unmet condition check (ConditionSecurity=!selinux).
Dec 11 08:30:24 localhost systemd[1]: Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Dec 11 08:30:24 localhost systemd[1]: Update Boot Loader Random Seed was skipped because no trigger condition checks were met.
Dec 11 08:30:24 localhost systemd[1]: Starting Automatic Boot Loader Update...
Dec 11 08:30:24 localhost systemd[1]: Commit a transient machine-id on disk was skipped because of an unmet condition check (ConditionPathIsMountPoint=/etc/machine-id).
Dec 11 08:30:24 localhost systemd[1]: Starting Create Volatile Files and Directories...
Dec 11 08:30:24 localhost bootctl[704]: Couldn't find EFI system partition, skipping.
Dec 11 08:30:24 localhost systemd[1]: Finished Automatic Boot Loader Update.
Dec 11 08:30:24 localhost systemd[1]: Finished Create Volatile Files and Directories.
Dec 11 08:30:24 localhost systemd[1]: Starting Security Auditing Service...
Dec 11 08:30:24 localhost systemd[1]: Starting RPC Bind...
Dec 11 08:30:24 localhost systemd[1]: Starting Rebuild Journal Catalog...
Dec 11 08:30:24 localhost auditd[710]: audit dispatcher initialized with q_depth=2000 and 1 active plugins
Dec 11 08:30:24 localhost systemd[1]: Finished Rebuild Journal Catalog.
Dec 11 08:30:24 localhost systemd[1]: Started RPC Bind.
Dec 11 08:30:24 localhost auditd[710]: Init complete, auditd 3.1.5 listening for events (startup state enable)
Dec 11 08:30:25 localhost augenrules[715]: /sbin/augenrules: No change
Dec 11 08:30:25 localhost augenrules[730]: No rules
Dec 11 08:30:25 localhost augenrules[730]: enabled 1
Dec 11 08:30:25 localhost augenrules[730]: failure 1
Dec 11 08:30:25 localhost augenrules[730]: pid 710
Dec 11 08:30:25 localhost augenrules[730]: rate_limit 0
Dec 11 08:30:25 localhost augenrules[730]: backlog_limit 8192
Dec 11 08:30:25 localhost augenrules[730]: lost 0
Dec 11 08:30:25 localhost augenrules[730]: backlog 2
Dec 11 08:30:25 localhost augenrules[730]: backlog_wait_time 60000
Dec 11 08:30:25 localhost augenrules[730]: backlog_wait_time_actual 0
Dec 11 08:30:25 localhost augenrules[730]: enabled 1
Dec 11 08:30:25 localhost augenrules[730]: failure 1
Dec 11 08:30:25 localhost augenrules[730]: pid 710
Dec 11 08:30:25 localhost augenrules[730]: rate_limit 0
Dec 11 08:30:25 localhost augenrules[730]: backlog_limit 8192
Dec 11 08:30:25 localhost augenrules[730]: lost 0
Dec 11 08:30:25 localhost augenrules[730]: backlog 4
Dec 11 08:30:25 localhost augenrules[730]: backlog_wait_time 60000
Dec 11 08:30:25 localhost augenrules[730]: backlog_wait_time_actual 0
Dec 11 08:30:25 localhost augenrules[730]: enabled 1
Dec 11 08:30:25 localhost augenrules[730]: failure 1
Dec 11 08:30:25 localhost augenrules[730]: pid 710
Dec 11 08:30:25 localhost augenrules[730]: rate_limit 0
Dec 11 08:30:25 localhost augenrules[730]: backlog_limit 8192
Dec 11 08:30:25 localhost augenrules[730]: lost 0
Dec 11 08:30:25 localhost augenrules[730]: backlog 8
Dec 11 08:30:25 localhost augenrules[730]: backlog_wait_time 60000
Dec 11 08:30:25 localhost augenrules[730]: backlog_wait_time_actual 0
Dec 11 08:30:25 localhost systemd[1]: Started Security Auditing Service.
Dec 11 08:30:25 localhost systemd[1]: Starting Record System Boot/Shutdown in UTMP...
Dec 11 08:30:25 localhost systemd[1]: Finished Record System Boot/Shutdown in UTMP.
Dec 11 08:30:25 localhost systemd[1]: Finished Rebuild Dynamic Linker Cache.
Dec 11 08:30:26 localhost systemd[1]: Finished Rebuild Hardware Database.
Dec 11 08:30:26 localhost systemd[1]: Starting Rule-based Manager for Device Events and Files...
Dec 11 08:30:26 localhost systemd[1]: Starting Update is Completed...
Dec 11 08:30:26 localhost systemd[1]: Finished Update is Completed.
Dec 11 08:30:26 localhost systemd-udevd[738]: Using default interface naming scheme 'rhel-9.0'.
Dec 11 08:30:26 localhost systemd[1]: Started Rule-based Manager for Device Events and Files.
Dec 11 08:30:26 localhost systemd[1]: Reached target System Initialization.
Dec 11 08:30:26 localhost systemd[1]: Started dnf makecache --timer.
Dec 11 08:30:26 localhost systemd[1]: Started Daily rotation of log files.
Dec 11 08:30:26 localhost systemd[1]: Started Daily Cleanup of Temporary Directories.
Dec 11 08:30:26 localhost systemd[1]: Reached target Timer Units.
Dec 11 08:30:26 localhost systemd[1]: Listening on D-Bus System Message Bus Socket.
Dec 11 08:30:26 localhost systemd[1]: Listening on SSSD Kerberos Cache Manager responder socket.
Dec 11 08:30:26 localhost systemd[1]: Reached target Socket Units.
Dec 11 08:30:26 localhost systemd[1]: Starting D-Bus System Message Bus...
Dec 11 08:30:26 localhost systemd[1]: TPM2 PCR Barrier (Initialization) was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Dec 11 08:30:26 localhost systemd[1]: Condition check resulted in /dev/ttyS0 being skipped.
Dec 11 08:30:26 localhost systemd[1]: Starting Load Kernel Module configfs...
Dec 11 08:30:26 localhost systemd[1]: modprobe@configfs.service: Deactivated successfully.
Dec 11 08:30:26 localhost systemd[1]: Finished Load Kernel Module configfs.
Dec 11 08:30:26 localhost systemd-udevd[745]: Network interface NamePolicy= disabled on kernel command line.
Dec 11 08:30:26 localhost systemd[1]: Started D-Bus System Message Bus.
Dec 11 08:30:26 localhost systemd[1]: Reached target Basic System.
Dec 11 08:30:26 localhost dbus-broker-lau[746]: Ready
Dec 11 08:30:26 localhost systemd[1]: Starting NTP client/server...
Dec 11 08:30:26 localhost systemd[1]: Starting Cloud-init: Local Stage (pre-network)...
Dec 11 08:30:26 localhost systemd[1]: Starting Restore /run/initramfs on shutdown...
Dec 11 08:30:26 localhost systemd[1]: Starting IPv4 firewall with iptables...
Dec 11 08:30:26 localhost systemd[1]: Started irqbalance daemon.
Dec 11 08:30:26 localhost systemd[1]: Load CPU microcode update was skipped because of an unmet condition check (ConditionPathExists=/sys/devices/system/cpu/microcode/reload).
Dec 11 08:30:26 localhost systemd[1]: OpenSSH ecdsa Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Dec 11 08:30:26 localhost systemd[1]: OpenSSH ed25519 Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Dec 11 08:30:26 localhost systemd[1]: OpenSSH rsa Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Dec 11 08:30:26 localhost systemd[1]: Reached target sshd-keygen.target.
Dec 11 08:30:26 localhost systemd[1]: System Security Services Daemon was skipped because no trigger condition checks were met.
Dec 11 08:30:26 localhost systemd[1]: Reached target User and Group Name Lookups.
Dec 11 08:30:26 localhost kernel: input: PC Speaker as /devices/platform/pcspkr/input/input6
Dec 11 08:30:26 localhost systemd[1]: Starting User Login Management...
Dec 11 08:30:26 localhost systemd[1]: Finished Restore /run/initramfs on shutdown.
Dec 11 08:30:26 localhost chronyd[794]: chronyd version 4.8 starting (+CMDMON +REFCLOCK +RTC +PRIVDROP +SCFILTER +SIGND +NTS +SECHASH +IPV6 +DEBUG)
Dec 11 08:30:26 localhost chronyd[794]: Loaded 0 symmetric keys
Dec 11 08:30:26 localhost chronyd[794]: Using right/UTC timezone to obtain leap second data
Dec 11 08:30:26 localhost chronyd[794]: Loaded seccomp filter (level 2)
Dec 11 08:30:26 localhost systemd[1]: Started NTP client/server.
Dec 11 08:30:26 localhost kernel: piix4_smbus 0000:00:01.3: SMBus Host Controller at 0x700, revision 0
Dec 11 08:30:26 localhost kernel: i2c i2c-0: 1/1 memory slots populated (from DMI)
Dec 11 08:30:26 localhost kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD
Dec 11 08:30:27 localhost systemd-logind[792]: Watching system buttons on /dev/input/event0 (Power Button)
Dec 11 08:30:27 localhost systemd-logind[792]: Watching system buttons on /dev/input/event1 (AT Translated Set 2 keyboard)
Dec 11 08:30:27 localhost systemd-logind[792]: New seat seat0.
Dec 11 08:30:27 localhost systemd[1]: Started User Login Management.
Dec 11 08:30:27 localhost kernel: Warning: Deprecated Driver is detected: nft_compat will not be maintained in a future major release and may be disabled
Dec 11 08:30:27 localhost kernel: Warning: Deprecated Driver is detected: nft_compat_module_init will not be maintained in a future major release and may be disabled
Dec 11 08:30:27 localhost kernel: [drm] pci: virtio-vga detected at 0000:00:02.0
Dec 11 08:30:27 localhost kernel: virtio-pci 0000:00:02.0: vgaarb: deactivate vga console
Dec 11 08:30:27 localhost kernel: Console: switching to colour dummy device 80x25
Dec 11 08:30:27 localhost kernel: [drm] features: -virgl +edid -resource_blob -host_visible
Dec 11 08:30:27 localhost kernel: [drm] features: -context_init
Dec 11 08:30:27 localhost kernel: [drm] number of scanouts: 1
Dec 11 08:30:27 localhost kernel: [drm] number of cap sets: 0
Dec 11 08:30:27 localhost kernel: [drm] Initialized virtio_gpu 0.1.0 for 0000:00:02.0 on minor 0
Dec 11 08:30:27 localhost kernel: fbcon: virtio_gpudrmfb (fb0) is primary device
Dec 11 08:30:27 localhost kernel: Console: switching to colour frame buffer device 128x48
Dec 11 08:30:27 localhost kernel: virtio-pci 0000:00:02.0: [drm] fb0: virtio_gpudrmfb frame buffer device
Dec 11 08:30:27 localhost kernel: kvm_amd: TSC scaling supported
Dec 11 08:30:27 localhost kernel: kvm_amd: Nested Virtualization enabled
Dec 11 08:30:27 localhost kernel: kvm_amd: Nested Paging enabled
Dec 11 08:30:27 localhost kernel: kvm_amd: LBR virtualization supported
Dec 11 08:30:27 localhost iptables.init[786]: iptables: Applying firewall rules: [  OK  ]
Dec 11 08:30:27 localhost systemd[1]: Finished IPv4 firewall with iptables.
Dec 11 08:30:27 localhost cloud-init[846]: Cloud-init v. 24.4-7.el9 running 'init-local' at Thu, 11 Dec 2025 08:30:27 +0000. Up 11.49 seconds.
Dec 11 08:30:27 localhost kernel: ISO 9660 Extensions: Microsoft Joliet Level 3
Dec 11 08:30:27 localhost kernel: ISO 9660 Extensions: RRIP_1991A
Dec 11 08:30:27 localhost systemd[1]: run-cloud\x2dinit-tmp-tmp8wh0eo_k.mount: Deactivated successfully.
Dec 11 08:30:27 localhost systemd[1]: Starting Hostname Service...
Dec 11 08:30:27 localhost systemd[1]: Started Hostname Service.
Dec 11 08:30:27 np0005555077.novalocal systemd-hostnamed[860]: Hostname set to <np0005555077.novalocal> (static)
Dec 11 08:30:28 np0005555077.novalocal systemd[1]: Finished Cloud-init: Local Stage (pre-network).
Dec 11 08:30:28 np0005555077.novalocal systemd[1]: Reached target Preparation for Network.
Dec 11 08:30:28 np0005555077.novalocal systemd[1]: Starting Network Manager...
Dec 11 08:30:28 np0005555077.novalocal NetworkManager[864]: <info>  [1765441828.1412] NetworkManager (version 1.54.2-1.el9) is starting... (boot:13dd1f60-0a56-492c-a25c-280d72789ed1)
Dec 11 08:30:28 np0005555077.novalocal NetworkManager[864]: <info>  [1765441828.1417] Read config: /etc/NetworkManager/NetworkManager.conf, /run/NetworkManager/conf.d/15-carrier-timeout.conf
Dec 11 08:30:28 np0005555077.novalocal NetworkManager[864]: <info>  [1765441828.1513] manager[0x564333b34000]: monitoring kernel firmware directory '/lib/firmware'.
Dec 11 08:30:28 np0005555077.novalocal NetworkManager[864]: <info>  [1765441828.1547] hostname: hostname: using hostnamed
Dec 11 08:30:28 np0005555077.novalocal NetworkManager[864]: <info>  [1765441828.1547] hostname: static hostname changed from (none) to "np0005555077.novalocal"
Dec 11 08:30:28 np0005555077.novalocal NetworkManager[864]: <info>  [1765441828.1551] dns-mgr: init: dns=default,systemd-resolved rc-manager=symlink (auto)
Dec 11 08:30:28 np0005555077.novalocal NetworkManager[864]: <info>  [1765441828.1672] manager[0x564333b34000]: rfkill: Wi-Fi hardware radio set enabled
Dec 11 08:30:28 np0005555077.novalocal NetworkManager[864]: <info>  [1765441828.1673] manager[0x564333b34000]: rfkill: WWAN hardware radio set enabled
Dec 11 08:30:28 np0005555077.novalocal systemd[1]: Listening on Load/Save RF Kill Switch Status /dev/rfkill Watch.
Dec 11 08:30:28 np0005555077.novalocal NetworkManager[864]: <info>  [1765441828.1728] Loaded device plugin: NMTeamFactory (/usr/lib64/NetworkManager/1.54.2-1.el9/libnm-device-plugin-team.so)
Dec 11 08:30:28 np0005555077.novalocal NetworkManager[864]: <info>  [1765441828.1729] manager: rfkill: Wi-Fi enabled by radio killswitch; enabled by state file
Dec 11 08:30:28 np0005555077.novalocal NetworkManager[864]: <info>  [1765441828.1729] manager: rfkill: WWAN enabled by radio killswitch; enabled by state file
Dec 11 08:30:28 np0005555077.novalocal NetworkManager[864]: <info>  [1765441828.1730] manager: Networking is enabled by state file
Dec 11 08:30:28 np0005555077.novalocal NetworkManager[864]: <info>  [1765441828.1732] settings: Loaded settings plugin: keyfile (internal)
Dec 11 08:30:28 np0005555077.novalocal NetworkManager[864]: <info>  [1765441828.1742] settings: Loaded settings plugin: ifcfg-rh ("/usr/lib64/NetworkManager/1.54.2-1.el9/libnm-settings-plugin-ifcfg-rh.so")
Dec 11 08:30:28 np0005555077.novalocal NetworkManager[864]: <info>  [1765441828.1760] Warning: the ifcfg-rh plugin is deprecated, please migrate connections to the keyfile format using "nmcli connection migrate"
Dec 11 08:30:28 np0005555077.novalocal NetworkManager[864]: <info>  [1765441828.1770] dhcp: init: Using DHCP client 'internal'
Dec 11 08:30:28 np0005555077.novalocal NetworkManager[864]: <info>  [1765441828.1772] manager: (lo): new Loopback device (/org/freedesktop/NetworkManager/Devices/1)
Dec 11 08:30:28 np0005555077.novalocal NetworkManager[864]: <info>  [1765441828.1783] device (lo): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Dec 11 08:30:28 np0005555077.novalocal NetworkManager[864]: <info>  [1765441828.1789] device (lo): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'external')
Dec 11 08:30:28 np0005555077.novalocal NetworkManager[864]: <info>  [1765441828.1796] device (lo): Activation: starting connection 'lo' (cc03eaf0-9208-4de5-bc26-72936417c77f)
Dec 11 08:30:28 np0005555077.novalocal NetworkManager[864]: <info>  [1765441828.1804] manager: (eth0): new Ethernet device (/org/freedesktop/NetworkManager/Devices/2)
Dec 11 08:30:28 np0005555077.novalocal NetworkManager[864]: <info>  [1765441828.1805] device (eth0): state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Dec 11 08:30:28 np0005555077.novalocal NetworkManager[864]: <info>  [1765441828.1866] bus-manager: acquired D-Bus service "org.freedesktop.NetworkManager"
Dec 11 08:30:28 np0005555077.novalocal NetworkManager[864]: <info>  [1765441828.1871] device (lo): state change: disconnected -> prepare (reason 'none', managed-type: 'external')
Dec 11 08:30:28 np0005555077.novalocal NetworkManager[864]: <info>  [1765441828.1873] device (lo): state change: prepare -> config (reason 'none', managed-type: 'external')
Dec 11 08:30:28 np0005555077.novalocal NetworkManager[864]: <info>  [1765441828.1875] device (lo): state change: config -> ip-config (reason 'none', managed-type: 'external')
Dec 11 08:30:28 np0005555077.novalocal NetworkManager[864]: <info>  [1765441828.1878] device (eth0): carrier: link connected
Dec 11 08:30:28 np0005555077.novalocal NetworkManager[864]: <info>  [1765441828.1882] device (lo): state change: ip-config -> ip-check (reason 'none', managed-type: 'external')
Dec 11 08:30:28 np0005555077.novalocal systemd[1]: Starting Network Manager Script Dispatcher Service...
Dec 11 08:30:28 np0005555077.novalocal NetworkManager[864]: <info>  [1765441828.1901] device (eth0): state change: unavailable -> disconnected (reason 'carrier-changed', managed-type: 'full')
Dec 11 08:30:28 np0005555077.novalocal NetworkManager[864]: <info>  [1765441828.1914] policy: auto-activating connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03)
Dec 11 08:30:28 np0005555077.novalocal systemd[1]: Started Network Manager.
Dec 11 08:30:28 np0005555077.novalocal NetworkManager[864]: <info>  [1765441828.1919] device (eth0): Activation: starting connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03)
Dec 11 08:30:28 np0005555077.novalocal NetworkManager[864]: <info>  [1765441828.1920] device (eth0): state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Dec 11 08:30:28 np0005555077.novalocal NetworkManager[864]: <info>  [1765441828.1922] manager: NetworkManager state is now CONNECTING
Dec 11 08:30:28 np0005555077.novalocal NetworkManager[864]: <info>  [1765441828.1924] device (eth0): state change: prepare -> config (reason 'none', managed-type: 'full')
Dec 11 08:30:28 np0005555077.novalocal systemd[1]: Reached target Network.
Dec 11 08:30:28 np0005555077.novalocal NetworkManager[864]: <info>  [1765441828.1934] device (eth0): state change: config -> ip-config (reason 'none', managed-type: 'full')
Dec 11 08:30:28 np0005555077.novalocal NetworkManager[864]: <info>  [1765441828.1938] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
Dec 11 08:30:28 np0005555077.novalocal systemd[1]: Starting Network Manager Wait Online...
Dec 11 08:30:28 np0005555077.novalocal NetworkManager[864]: <info>  [1765441828.1987] dhcp4 (eth0): state changed new lease, address=38.102.83.223
Dec 11 08:30:28 np0005555077.novalocal NetworkManager[864]: <info>  [1765441828.1998] policy: set 'System eth0' (eth0) as default for IPv4 routing and DNS
Dec 11 08:30:28 np0005555077.novalocal systemd[1]: Starting GSSAPI Proxy Daemon...
Dec 11 08:30:28 np0005555077.novalocal NetworkManager[864]: <info>  [1765441828.2018] device (eth0): state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Dec 11 08:30:28 np0005555077.novalocal systemd[1]: Started Network Manager Script Dispatcher Service.
Dec 11 08:30:28 np0005555077.novalocal NetworkManager[864]: <info>  [1765441828.2093] device (lo): state change: ip-check -> secondaries (reason 'none', managed-type: 'external')
Dec 11 08:30:28 np0005555077.novalocal NetworkManager[864]: <info>  [1765441828.2095] device (eth0): state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Dec 11 08:30:28 np0005555077.novalocal NetworkManager[864]: <info>  [1765441828.2097] device (lo): state change: secondaries -> activated (reason 'none', managed-type: 'external')
Dec 11 08:30:28 np0005555077.novalocal NetworkManager[864]: <info>  [1765441828.2101] device (lo): Activation: successful, device activated.
Dec 11 08:30:28 np0005555077.novalocal NetworkManager[864]: <info>  [1765441828.2107] device (eth0): state change: secondaries -> activated (reason 'none', managed-type: 'full')
Dec 11 08:30:28 np0005555077.novalocal NetworkManager[864]: <info>  [1765441828.2110] manager: NetworkManager state is now CONNECTED_SITE
Dec 11 08:30:28 np0005555077.novalocal NetworkManager[864]: <info>  [1765441828.2112] device (eth0): Activation: successful, device activated.
Dec 11 08:30:28 np0005555077.novalocal NetworkManager[864]: <info>  [1765441828.2116] manager: NetworkManager state is now CONNECTED_GLOBAL
Dec 11 08:30:28 np0005555077.novalocal NetworkManager[864]: <info>  [1765441828.2120] manager: startup complete
Dec 11 08:30:28 np0005555077.novalocal systemd[1]: Finished Network Manager Wait Online.
Dec 11 08:30:28 np0005555077.novalocal systemd[1]: Starting Cloud-init: Network Stage...
Dec 11 08:30:28 np0005555077.novalocal systemd[1]: Started GSSAPI Proxy Daemon.
Dec 11 08:30:28 np0005555077.novalocal systemd[1]: RPC security service for NFS client and server was skipped because of an unmet condition check (ConditionPathExists=/etc/krb5.keytab).
Dec 11 08:30:28 np0005555077.novalocal systemd[1]: Reached target NFS client services.
Dec 11 08:30:28 np0005555077.novalocal systemd[1]: Reached target Preparation for Remote File Systems.
Dec 11 08:30:28 np0005555077.novalocal systemd[1]: Reached target Remote File Systems.
Dec 11 08:30:28 np0005555077.novalocal systemd[1]: TPM2 PCR Barrier (User) was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Dec 11 08:30:28 np0005555077.novalocal cloud-init[927]: Cloud-init v. 24.4-7.el9 running 'init' at Thu, 11 Dec 2025 08:30:28 +0000. Up 12.50 seconds.
Dec 11 08:30:28 np0005555077.novalocal cloud-init[927]: ci-info: +++++++++++++++++++++++++++++++++++++++Net device info+++++++++++++++++++++++++++++++++++++++
Dec 11 08:30:28 np0005555077.novalocal cloud-init[927]: ci-info: +--------+------+------------------------------+---------------+--------+-------------------+
Dec 11 08:30:28 np0005555077.novalocal cloud-init[927]: ci-info: | Device |  Up  |           Address            |      Mask     | Scope  |     Hw-Address    |
Dec 11 08:30:28 np0005555077.novalocal cloud-init[927]: ci-info: +--------+------+------------------------------+---------------+--------+-------------------+
Dec 11 08:30:28 np0005555077.novalocal cloud-init[927]: ci-info: |  eth0  | True |        38.102.83.223         | 255.255.255.0 | global | fa:16:3e:43:e1:76 |
Dec 11 08:30:28 np0005555077.novalocal cloud-init[927]: ci-info: |  eth0  | True | fe80::f816:3eff:fe43:e176/64 |       .       |  link  | fa:16:3e:43:e1:76 |
Dec 11 08:30:28 np0005555077.novalocal cloud-init[927]: ci-info: |   lo   | True |          127.0.0.1           |   255.0.0.0   |  host  |         .         |
Dec 11 08:30:28 np0005555077.novalocal cloud-init[927]: ci-info: |   lo   | True |           ::1/128            |       .       |  host  |         .         |
Dec 11 08:30:28 np0005555077.novalocal cloud-init[927]: ci-info: +--------+------+------------------------------+---------------+--------+-------------------+
Dec 11 08:30:28 np0005555077.novalocal cloud-init[927]: ci-info: +++++++++++++++++++++++++++++++++Route IPv4 info+++++++++++++++++++++++++++++++++
Dec 11 08:30:28 np0005555077.novalocal cloud-init[927]: ci-info: +-------+-----------------+---------------+-----------------+-----------+-------+
Dec 11 08:30:28 np0005555077.novalocal cloud-init[927]: ci-info: | Route |   Destination   |    Gateway    |     Genmask     | Interface | Flags |
Dec 11 08:30:28 np0005555077.novalocal cloud-init[927]: ci-info: +-------+-----------------+---------------+-----------------+-----------+-------+
Dec 11 08:30:28 np0005555077.novalocal cloud-init[927]: ci-info: |   0   |     0.0.0.0     |  38.102.83.1  |     0.0.0.0     |    eth0   |   UG  |
Dec 11 08:30:28 np0005555077.novalocal cloud-init[927]: ci-info: |   1   |   38.102.83.0   |    0.0.0.0    |  255.255.255.0  |    eth0   |   U   |
Dec 11 08:30:28 np0005555077.novalocal cloud-init[927]: ci-info: |   2   | 169.254.169.254 | 38.102.83.126 | 255.255.255.255 |    eth0   |  UGH  |
Dec 11 08:30:28 np0005555077.novalocal cloud-init[927]: ci-info: +-------+-----------------+---------------+-----------------+-----------+-------+
Dec 11 08:30:28 np0005555077.novalocal cloud-init[927]: ci-info: +++++++++++++++++++Route IPv6 info+++++++++++++++++++
Dec 11 08:30:28 np0005555077.novalocal cloud-init[927]: ci-info: +-------+-------------+---------+-----------+-------+
Dec 11 08:30:28 np0005555077.novalocal cloud-init[927]: ci-info: | Route | Destination | Gateway | Interface | Flags |
Dec 11 08:30:28 np0005555077.novalocal cloud-init[927]: ci-info: +-------+-------------+---------+-----------+-------+
Dec 11 08:30:28 np0005555077.novalocal cloud-init[927]: ci-info: |   1   |  fe80::/64  |    ::   |    eth0   |   U   |
Dec 11 08:30:28 np0005555077.novalocal cloud-init[927]: ci-info: |   3   |  multicast  |    ::   |    eth0   |   U   |
Dec 11 08:30:28 np0005555077.novalocal cloud-init[927]: ci-info: +-------+-------------+---------+-----------+-------+
Dec 11 08:30:29 np0005555077.novalocal useradd[993]: new group: name=cloud-user, GID=1001
Dec 11 08:30:29 np0005555077.novalocal useradd[993]: new user: name=cloud-user, UID=1001, GID=1001, home=/home/cloud-user, shell=/bin/bash, from=none
Dec 11 08:30:29 np0005555077.novalocal useradd[993]: add 'cloud-user' to group 'adm'
Dec 11 08:30:29 np0005555077.novalocal useradd[993]: add 'cloud-user' to group 'systemd-journal'
Dec 11 08:30:29 np0005555077.novalocal useradd[993]: add 'cloud-user' to shadow group 'adm'
Dec 11 08:30:29 np0005555077.novalocal useradd[993]: add 'cloud-user' to shadow group 'systemd-journal'
Dec 11 08:30:29 np0005555077.novalocal cloud-init[927]: Generating public/private rsa key pair.
Dec 11 08:30:29 np0005555077.novalocal cloud-init[927]: Your identification has been saved in /etc/ssh/ssh_host_rsa_key
Dec 11 08:30:29 np0005555077.novalocal cloud-init[927]: Your public key has been saved in /etc/ssh/ssh_host_rsa_key.pub
Dec 11 08:30:29 np0005555077.novalocal cloud-init[927]: The key fingerprint is:
Dec 11 08:30:29 np0005555077.novalocal cloud-init[927]: SHA256:+Iqzb0ZcZR2OgxVTGn7JDo/TXhcw6vYjNc+dsGP2P7U root@np0005555077.novalocal
Dec 11 08:30:29 np0005555077.novalocal cloud-init[927]: The key's randomart image is:
Dec 11 08:30:29 np0005555077.novalocal cloud-init[927]: +---[RSA 3072]----+
Dec 11 08:30:29 np0005555077.novalocal cloud-init[927]: |          =+o+   |
Dec 11 08:30:29 np0005555077.novalocal cloud-init[927]: |         +oBo.o  |
Dec 11 08:30:29 np0005555077.novalocal cloud-init[927]: |        .o*.=  . |
Dec 11 08:30:29 np0005555077.novalocal cloud-init[927]: |       .. .O    .|
Dec 11 08:30:29 np0005555077.novalocal cloud-init[927]: |     ...S oo++. .|
Dec 11 08:30:29 np0005555077.novalocal cloud-init[927]: |      o.  .oo.*.+|
Dec 11 08:30:29 np0005555077.novalocal cloud-init[927]: |     .  .  ..B ++|
Dec 11 08:30:29 np0005555077.novalocal cloud-init[927]: |    ..o.    + +E |
Dec 11 08:30:29 np0005555077.novalocal cloud-init[927]: |    o*o        .+|
Dec 11 08:30:29 np0005555077.novalocal cloud-init[927]: +----[SHA256]-----+
Dec 11 08:30:29 np0005555077.novalocal cloud-init[927]: Generating public/private ecdsa key pair.
Dec 11 08:30:29 np0005555077.novalocal cloud-init[927]: Your identification has been saved in /etc/ssh/ssh_host_ecdsa_key
Dec 11 08:30:29 np0005555077.novalocal cloud-init[927]: Your public key has been saved in /etc/ssh/ssh_host_ecdsa_key.pub
Dec 11 08:30:29 np0005555077.novalocal cloud-init[927]: The key fingerprint is:
Dec 11 08:30:29 np0005555077.novalocal cloud-init[927]: SHA256:Yw8JqPtpbJN/FhEPS2LxIzoxbdJBvQzKuY5awf9X8t4 root@np0005555077.novalocal
Dec 11 08:30:29 np0005555077.novalocal cloud-init[927]: The key's randomart image is:
Dec 11 08:30:29 np0005555077.novalocal cloud-init[927]: +---[ECDSA 256]---+
Dec 11 08:30:29 np0005555077.novalocal cloud-init[927]: |     .+o         |
Dec 11 08:30:29 np0005555077.novalocal cloud-init[927]: |     ++o=        |
Dec 11 08:30:29 np0005555077.novalocal cloud-init[927]: |   .=+*=o*       |
Dec 11 08:30:29 np0005555077.novalocal cloud-init[927]: | . .+* o=o.      |
Dec 11 08:30:29 np0005555077.novalocal cloud-init[927]: |  + o.  S.       |
Dec 11 08:30:29 np0005555077.novalocal cloud-init[927]: |   +.. .o+.      |
Dec 11 08:30:29 np0005555077.novalocal cloud-init[927]: |  o+..   =.      |
Dec 11 08:30:29 np0005555077.novalocal cloud-init[927]: | ...Bo  + ..     |
Dec 11 08:30:29 np0005555077.novalocal cloud-init[927]: |.. oooo+ .. E    |
Dec 11 08:30:29 np0005555077.novalocal cloud-init[927]: +----[SHA256]-----+
Dec 11 08:30:29 np0005555077.novalocal cloud-init[927]: Generating public/private ed25519 key pair.
Dec 11 08:30:29 np0005555077.novalocal cloud-init[927]: Your identification has been saved in /etc/ssh/ssh_host_ed25519_key
Dec 11 08:30:29 np0005555077.novalocal cloud-init[927]: Your public key has been saved in /etc/ssh/ssh_host_ed25519_key.pub
Dec 11 08:30:29 np0005555077.novalocal cloud-init[927]: The key fingerprint is:
Dec 11 08:30:29 np0005555077.novalocal cloud-init[927]: SHA256:xh97wh4YN5wGsGJzbLwggR5iUuk28a+k/GQKFxuQ8wU root@np0005555077.novalocal
Dec 11 08:30:29 np0005555077.novalocal cloud-init[927]: The key's randomart image is:
Dec 11 08:30:29 np0005555077.novalocal cloud-init[927]: +--[ED25519 256]--+
Dec 11 08:30:29 np0005555077.novalocal cloud-init[927]: | oE.  .          |
Dec 11 08:30:29 np0005555077.novalocal cloud-init[927]: |++oo o o         |
Dec 11 08:30:29 np0005555077.novalocal cloud-init[927]: |O.oo* * .        |
Dec 11 08:30:29 np0005555077.novalocal cloud-init[927]: | =++.* o o .     |
Dec 11 08:30:29 np0005555077.novalocal cloud-init[927]: | .+. .. S B      |
Dec 11 08:30:29 np0005555077.novalocal cloud-init[927]: |   +. .. B +     |
Dec 11 08:30:29 np0005555077.novalocal cloud-init[927]: |..ooo.  . * .    |
Dec 11 08:30:29 np0005555077.novalocal cloud-init[927]: | oo+.    . +     |
Dec 11 08:30:29 np0005555077.novalocal cloud-init[927]: |  ...     .      |
Dec 11 08:30:29 np0005555077.novalocal cloud-init[927]: +----[SHA256]-----+
Dec 11 08:30:30 np0005555077.novalocal systemd[1]: Finished Cloud-init: Network Stage.
Dec 11 08:30:30 np0005555077.novalocal systemd[1]: Reached target Cloud-config availability.
Dec 11 08:30:30 np0005555077.novalocal systemd[1]: Reached target Network is Online.
Dec 11 08:30:30 np0005555077.novalocal systemd[1]: Starting Cloud-init: Config Stage...
Dec 11 08:30:30 np0005555077.novalocal systemd[1]: Starting Crash recovery kernel arming...
Dec 11 08:30:30 np0005555077.novalocal systemd[1]: Starting Notify NFS peers of a restart...
Dec 11 08:30:30 np0005555077.novalocal systemd[1]: Starting System Logging Service...
Dec 11 08:30:30 np0005555077.novalocal systemd[1]: Starting OpenSSH server daemon...
Dec 11 08:30:30 np0005555077.novalocal systemd[1]: Starting Permit User Sessions...
Dec 11 08:30:30 np0005555077.novalocal sm-notify[1009]: Version 2.5.4 starting
Dec 11 08:30:30 np0005555077.novalocal systemd[1]: Started Notify NFS peers of a restart.
Dec 11 08:30:30 np0005555077.novalocal sshd[1011]: Server listening on 0.0.0.0 port 22.
Dec 11 08:30:30 np0005555077.novalocal sshd[1011]: Server listening on :: port 22.
Dec 11 08:30:30 np0005555077.novalocal systemd[1]: Started OpenSSH server daemon.
Dec 11 08:30:30 np0005555077.novalocal systemd[1]: Finished Permit User Sessions.
Dec 11 08:30:30 np0005555077.novalocal systemd[1]: Started Command Scheduler.
Dec 11 08:30:30 np0005555077.novalocal systemd[1]: Started Getty on tty1.
Dec 11 08:30:30 np0005555077.novalocal systemd[1]: Started Serial Getty on ttyS0.
Dec 11 08:30:30 np0005555077.novalocal crond[1014]: (CRON) STARTUP (1.5.7)
Dec 11 08:30:30 np0005555077.novalocal crond[1014]: (CRON) INFO (Syslog will be used instead of sendmail.)
Dec 11 08:30:30 np0005555077.novalocal crond[1014]: (CRON) INFO (RANDOM_DELAY will be scaled with factor 40% if used.)
Dec 11 08:30:30 np0005555077.novalocal systemd[1]: Reached target Login Prompts.
Dec 11 08:30:30 np0005555077.novalocal crond[1014]: (CRON) INFO (running with inotify support)
Dec 11 08:30:30 np0005555077.novalocal rsyslogd[1010]: [origin software="rsyslogd" swVersion="8.2510.0-2.el9" x-pid="1010" x-info="https://www.rsyslog.com"] start
Dec 11 08:30:30 np0005555077.novalocal rsyslogd[1010]: imjournal: No statefile exists, /var/lib/rsyslog/imjournal.state will be created (ignore if this is first run): No such file or directory [v8.2510.0-2.el9 try https://www.rsyslog.com/e/2040 ]
Dec 11 08:30:30 np0005555077.novalocal systemd[1]: Started System Logging Service.
Dec 11 08:30:30 np0005555077.novalocal systemd[1]: Reached target Multi-User System.
Dec 11 08:30:30 np0005555077.novalocal systemd[1]: Starting Record Runlevel Change in UTMP...
Dec 11 08:30:30 np0005555077.novalocal systemd[1]: systemd-update-utmp-runlevel.service: Deactivated successfully.
Dec 11 08:30:30 np0005555077.novalocal systemd[1]: Finished Record Runlevel Change in UTMP.
Dec 11 08:30:30 np0005555077.novalocal rsyslogd[1010]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Dec 11 08:30:30 np0005555077.novalocal kdumpctl[1021]: kdump: No kdump initial ramdisk found.
Dec 11 08:30:30 np0005555077.novalocal kdumpctl[1021]: kdump: Rebuilding /boot/initramfs-5.14.0-648.el9.x86_64kdump.img
Dec 11 08:30:30 np0005555077.novalocal sshd-session[1076]: Unable to negotiate with 38.102.83.114 port 35260: no matching host key type found. Their offer: ssh-ed25519,ssh-ed25519-cert-v01@openssh.com [preauth]
Dec 11 08:30:30 np0005555077.novalocal sshd-session[1105]: Unable to negotiate with 38.102.83.114 port 35272: no matching host key type found. Their offer: ecdsa-sha2-nistp384,ecdsa-sha2-nistp384-cert-v01@openssh.com [preauth]
Dec 11 08:30:30 np0005555077.novalocal sshd-session[1112]: Unable to negotiate with 38.102.83.114 port 35286: no matching host key type found. Their offer: ecdsa-sha2-nistp521,ecdsa-sha2-nistp521-cert-v01@openssh.com [preauth]
Dec 11 08:30:30 np0005555077.novalocal sshd-session[1068]: Connection closed by 38.102.83.114 port 35250 [preauth]
Dec 11 08:30:30 np0005555077.novalocal sshd-session[1151]: Unable to negotiate with 38.102.83.114 port 35316: no matching host key type found. Their offer: ssh-rsa,ssh-rsa-cert-v01@openssh.com [preauth]
Dec 11 08:30:30 np0005555077.novalocal sshd-session[1089]: Connection closed by 38.102.83.114 port 35270 [preauth]
Dec 11 08:30:30 np0005555077.novalocal sshd-session[1159]: Unable to negotiate with 38.102.83.114 port 35318: no matching host key type found. Their offer: ssh-dss,ssh-dss-cert-v01@openssh.com [preauth]
Dec 11 08:30:30 np0005555077.novalocal cloud-init[1166]: Cloud-init v. 24.4-7.el9 running 'modules:config' at Thu, 11 Dec 2025 08:30:30 +0000. Up 14.24 seconds.
Dec 11 08:30:30 np0005555077.novalocal sshd-session[1127]: Connection closed by 38.102.83.114 port 35298 [preauth]
Dec 11 08:30:30 np0005555077.novalocal sshd-session[1142]: Connection closed by 38.102.83.114 port 35304 [preauth]
Dec 11 08:30:30 np0005555077.novalocal systemd[1]: Finished Cloud-init: Config Stage.
Dec 11 08:30:30 np0005555077.novalocal systemd[1]: Starting Cloud-init: Final Stage...
Dec 11 08:30:30 np0005555077.novalocal dracut[1288]: dracut-057-102.git20250818.el9
Dec 11 08:30:30 np0005555077.novalocal cloud-init[1306]: Cloud-init v. 24.4-7.el9 running 'modules:final' at Thu, 11 Dec 2025 08:30:30 +0000. Up 14.74 seconds.
Dec 11 08:30:30 np0005555077.novalocal dracut[1290]: Executing: /usr/bin/dracut --quiet --hostonly --hostonly-cmdline --hostonly-i18n --hostonly-mode strict --hostonly-nics  --mount "/dev/disk/by-uuid/cbdedf45-ed1d-4952-82a8-33a12c0ba266 /sysroot xfs rw,relatime,seclabel,attr2,inode64,logbufs=8,logbsize=32k,noquota" --squash-compressor zstd --no-hostonly-default-device --add-confdir /lib/kdump/dracut.conf.d -f /boot/initramfs-5.14.0-648.el9.x86_64kdump.img 5.14.0-648.el9.x86_64
Dec 11 08:30:30 np0005555077.novalocal cloud-init[1325]: #############################################################
Dec 11 08:30:30 np0005555077.novalocal cloud-init[1328]: -----BEGIN SSH HOST KEY FINGERPRINTS-----
Dec 11 08:30:30 np0005555077.novalocal cloud-init[1332]: 256 SHA256:Yw8JqPtpbJN/FhEPS2LxIzoxbdJBvQzKuY5awf9X8t4 root@np0005555077.novalocal (ECDSA)
Dec 11 08:30:30 np0005555077.novalocal cloud-init[1336]: 256 SHA256:xh97wh4YN5wGsGJzbLwggR5iUuk28a+k/GQKFxuQ8wU root@np0005555077.novalocal (ED25519)
Dec 11 08:30:30 np0005555077.novalocal cloud-init[1343]: 3072 SHA256:+Iqzb0ZcZR2OgxVTGn7JDo/TXhcw6vYjNc+dsGP2P7U root@np0005555077.novalocal (RSA)
Dec 11 08:30:30 np0005555077.novalocal cloud-init[1346]: -----END SSH HOST KEY FINGERPRINTS-----
Dec 11 08:30:30 np0005555077.novalocal cloud-init[1348]: #############################################################
Dec 11 08:30:31 np0005555077.novalocal cloud-init[1306]: Cloud-init v. 24.4-7.el9 finished at Thu, 11 Dec 2025 08:30:31 +0000. Datasource DataSourceConfigDrive [net,ver=2][source=/dev/sr0].  Up 14.92 seconds
Dec 11 08:30:31 np0005555077.novalocal systemd[1]: Finished Cloud-init: Final Stage.
Dec 11 08:30:31 np0005555077.novalocal systemd[1]: Reached target Cloud-init target.
Dec 11 08:30:31 np0005555077.novalocal dracut[1290]: dracut module 'systemd-networkd' will not be installed, because command 'networkctl' could not be found!
Dec 11 08:30:31 np0005555077.novalocal dracut[1290]: dracut module 'systemd-networkd' will not be installed, because command '/usr/lib/systemd/systemd-networkd' could not be found!
Dec 11 08:30:31 np0005555077.novalocal dracut[1290]: dracut module 'systemd-networkd' will not be installed, because command '/usr/lib/systemd/systemd-networkd-wait-online' could not be found!
Dec 11 08:30:31 np0005555077.novalocal dracut[1290]: dracut module 'systemd-resolved' will not be installed, because command 'resolvectl' could not be found!
Dec 11 08:30:31 np0005555077.novalocal dracut[1290]: dracut module 'systemd-resolved' will not be installed, because command '/usr/lib/systemd/systemd-resolved' could not be found!
Dec 11 08:30:31 np0005555077.novalocal dracut[1290]: dracut module 'systemd-timesyncd' will not be installed, because command '/usr/lib/systemd/systemd-timesyncd' could not be found!
Dec 11 08:30:31 np0005555077.novalocal dracut[1290]: dracut module 'systemd-timesyncd' will not be installed, because command '/usr/lib/systemd/systemd-time-wait-sync' could not be found!
Dec 11 08:30:31 np0005555077.novalocal dracut[1290]: dracut module 'busybox' will not be installed, because command 'busybox' could not be found!
Dec 11 08:30:31 np0005555077.novalocal dracut[1290]: dracut module 'dbus-daemon' will not be installed, because command 'dbus-daemon' could not be found!
Dec 11 08:30:31 np0005555077.novalocal dracut[1290]: dracut module 'rngd' will not be installed, because command 'rngd' could not be found!
Dec 11 08:30:31 np0005555077.novalocal dracut[1290]: dracut module 'connman' will not be installed, because command 'connmand' could not be found!
Dec 11 08:30:31 np0005555077.novalocal dracut[1290]: dracut module 'connman' will not be installed, because command 'connmanctl' could not be found!
Dec 11 08:30:31 np0005555077.novalocal dracut[1290]: dracut module 'connman' will not be installed, because command 'connmand-wait-online' could not be found!
Dec 11 08:30:31 np0005555077.novalocal dracut[1290]: dracut module 'network-wicked' will not be installed, because command 'wicked' could not be found!
Dec 11 08:30:31 np0005555077.novalocal dracut[1290]: Module 'ifcfg' will not be installed, because it's in the list to be omitted!
Dec 11 08:30:31 np0005555077.novalocal dracut[1290]: Module 'plymouth' will not be installed, because it's in the list to be omitted!
Dec 11 08:30:31 np0005555077.novalocal dracut[1290]: 62bluetooth: Could not find any command of '/usr/lib/bluetooth/bluetoothd /usr/libexec/bluetooth/bluetoothd'!
Dec 11 08:30:31 np0005555077.novalocal dracut[1290]: dracut module 'lvmmerge' will not be installed, because command 'lvm' could not be found!
Dec 11 08:30:31 np0005555077.novalocal dracut[1290]: dracut module 'lvmthinpool-monitor' will not be installed, because command 'lvm' could not be found!
Dec 11 08:30:31 np0005555077.novalocal dracut[1290]: dracut module 'btrfs' will not be installed, because command 'btrfs' could not be found!
Dec 11 08:30:31 np0005555077.novalocal dracut[1290]: dracut module 'dmraid' will not be installed, because command 'dmraid' could not be found!
Dec 11 08:30:31 np0005555077.novalocal dracut[1290]: dracut module 'lvm' will not be installed, because command 'lvm' could not be found!
Dec 11 08:30:31 np0005555077.novalocal dracut[1290]: dracut module 'mdraid' will not be installed, because command 'mdadm' could not be found!
Dec 11 08:30:31 np0005555077.novalocal dracut[1290]: dracut module 'pcsc' will not be installed, because command 'pcscd' could not be found!
Dec 11 08:30:31 np0005555077.novalocal dracut[1290]: dracut module 'tpm2-tss' will not be installed, because command 'tpm2' could not be found!
Dec 11 08:30:31 np0005555077.novalocal dracut[1290]: dracut module 'cifs' will not be installed, because command 'mount.cifs' could not be found!
Dec 11 08:30:32 np0005555077.novalocal dracut[1290]: dracut module 'iscsi' will not be installed, because command 'iscsi-iname' could not be found!
Dec 11 08:30:32 np0005555077.novalocal dracut[1290]: dracut module 'iscsi' will not be installed, because command 'iscsiadm' could not be found!
Dec 11 08:30:32 np0005555077.novalocal dracut[1290]: dracut module 'iscsi' will not be installed, because command 'iscsid' could not be found!
Dec 11 08:30:32 np0005555077.novalocal dracut[1290]: dracut module 'nvmf' will not be installed, because command 'nvme' could not be found!
Dec 11 08:30:32 np0005555077.novalocal dracut[1290]: Module 'resume' will not be installed, because it's in the list to be omitted!
Dec 11 08:30:32 np0005555077.novalocal dracut[1290]: dracut module 'biosdevname' will not be installed, because command 'biosdevname' could not be found!
Dec 11 08:30:32 np0005555077.novalocal dracut[1290]: Module 'earlykdump' will not be installed, because it's in the list to be omitted!
Dec 11 08:30:32 np0005555077.novalocal dracut[1290]: dracut module 'memstrack' will not be installed, because command 'memstrack' could not be found!
Dec 11 08:30:32 np0005555077.novalocal dracut[1290]: memstrack is not available
Dec 11 08:30:32 np0005555077.novalocal dracut[1290]: If you need to use rd.memdebug>=4, please install memstrack and procps-ng
Dec 11 08:30:32 np0005555077.novalocal dracut[1290]: dracut module 'systemd-resolved' will not be installed, because command 'resolvectl' could not be found!
Dec 11 08:30:32 np0005555077.novalocal dracut[1290]: dracut module 'systemd-resolved' will not be installed, because command '/usr/lib/systemd/systemd-resolved' could not be found!
Dec 11 08:30:32 np0005555077.novalocal dracut[1290]: dracut module 'systemd-timesyncd' will not be installed, because command '/usr/lib/systemd/systemd-timesyncd' could not be found!
Dec 11 08:30:32 np0005555077.novalocal dracut[1290]: dracut module 'systemd-timesyncd' will not be installed, because command '/usr/lib/systemd/systemd-time-wait-sync' could not be found!
Dec 11 08:30:32 np0005555077.novalocal dracut[1290]: dracut module 'busybox' will not be installed, because command 'busybox' could not be found!
Dec 11 08:30:32 np0005555077.novalocal dracut[1290]: dracut module 'dbus-daemon' will not be installed, because command 'dbus-daemon' could not be found!
Dec 11 08:30:32 np0005555077.novalocal dracut[1290]: dracut module 'rngd' will not be installed, because command 'rngd' could not be found!
Dec 11 08:30:32 np0005555077.novalocal dracut[1290]: dracut module 'connman' will not be installed, because command 'connmand' could not be found!
Dec 11 08:30:32 np0005555077.novalocal dracut[1290]: dracut module 'connman' will not be installed, because command 'connmanctl' could not be found!
Dec 11 08:30:32 np0005555077.novalocal dracut[1290]: dracut module 'connman' will not be installed, because command 'connmand-wait-online' could not be found!
Dec 11 08:30:32 np0005555077.novalocal dracut[1290]: dracut module 'network-wicked' will not be installed, because command 'wicked' could not be found!
Dec 11 08:30:32 np0005555077.novalocal dracut[1290]: 62bluetooth: Could not find any command of '/usr/lib/bluetooth/bluetoothd /usr/libexec/bluetooth/bluetoothd'!
Dec 11 08:30:32 np0005555077.novalocal dracut[1290]: dracut module 'lvmmerge' will not be installed, because command 'lvm' could not be found!
Dec 11 08:30:32 np0005555077.novalocal dracut[1290]: dracut module 'lvmthinpool-monitor' will not be installed, because command 'lvm' could not be found!
Dec 11 08:30:32 np0005555077.novalocal dracut[1290]: dracut module 'btrfs' will not be installed, because command 'btrfs' could not be found!
Dec 11 08:30:32 np0005555077.novalocal dracut[1290]: dracut module 'dmraid' will not be installed, because command 'dmraid' could not be found!
Dec 11 08:30:32 np0005555077.novalocal dracut[1290]: dracut module 'lvm' will not be installed, because command 'lvm' could not be found!
Dec 11 08:30:32 np0005555077.novalocal dracut[1290]: dracut module 'mdraid' will not be installed, because command 'mdadm' could not be found!
Dec 11 08:30:32 np0005555077.novalocal dracut[1290]: dracut module 'pcsc' will not be installed, because command 'pcscd' could not be found!
Dec 11 08:30:32 np0005555077.novalocal dracut[1290]: dracut module 'tpm2-tss' will not be installed, because command 'tpm2' could not be found!
Dec 11 08:30:32 np0005555077.novalocal dracut[1290]: dracut module 'cifs' will not be installed, because command 'mount.cifs' could not be found!
Dec 11 08:30:32 np0005555077.novalocal dracut[1290]: dracut module 'iscsi' will not be installed, because command 'iscsi-iname' could not be found!
Dec 11 08:30:32 np0005555077.novalocal dracut[1290]: dracut module 'iscsi' will not be installed, because command 'iscsiadm' could not be found!
Dec 11 08:30:32 np0005555077.novalocal dracut[1290]: dracut module 'iscsi' will not be installed, because command 'iscsid' could not be found!
Dec 11 08:30:32 np0005555077.novalocal dracut[1290]: dracut module 'nvmf' will not be installed, because command 'nvme' could not be found!
Dec 11 08:30:32 np0005555077.novalocal dracut[1290]: dracut module 'memstrack' will not be installed, because command 'memstrack' could not be found!
Dec 11 08:30:32 np0005555077.novalocal dracut[1290]: memstrack is not available
Dec 11 08:30:32 np0005555077.novalocal dracut[1290]: If you need to use rd.memdebug>=4, please install memstrack and procps-ng
Dec 11 08:30:32 np0005555077.novalocal dracut[1290]: *** Including module: systemd ***
Dec 11 08:30:32 np0005555077.novalocal dracut[1290]: *** Including module: fips ***
Dec 11 08:30:33 np0005555077.novalocal chronyd[794]: Selected source 174.142.148.226 (2.centos.pool.ntp.org)
Dec 11 08:30:33 np0005555077.novalocal chronyd[794]: System clock TAI offset set to 37 seconds
Dec 11 08:30:33 np0005555077.novalocal dracut[1290]: *** Including module: systemd-initrd ***
Dec 11 08:30:33 np0005555077.novalocal dracut[1290]: *** Including module: i18n ***
Dec 11 08:30:33 np0005555077.novalocal dracut[1290]: *** Including module: drm ***
Dec 11 08:30:33 np0005555077.novalocal dracut[1290]: *** Including module: prefixdevname ***
Dec 11 08:30:33 np0005555077.novalocal dracut[1290]: *** Including module: kernel-modules ***
Dec 11 08:30:33 np0005555077.novalocal kernel: block vda: the capability attribute has been deprecated.
Dec 11 08:30:34 np0005555077.novalocal dracut[1290]: *** Including module: kernel-modules-extra ***
Dec 11 08:30:34 np0005555077.novalocal dracut[1290]:   kernel-modules-extra: configuration source "/run/depmod.d" does not exist
Dec 11 08:30:34 np0005555077.novalocal dracut[1290]:   kernel-modules-extra: configuration source "/lib/depmod.d" does not exist
Dec 11 08:30:34 np0005555077.novalocal dracut[1290]:   kernel-modules-extra: parsing configuration file "/etc/depmod.d/dist.conf"
Dec 11 08:30:34 np0005555077.novalocal dracut[1290]:   kernel-modules-extra: /etc/depmod.d/dist.conf: added "updates extra built-in weak-updates" to the list of search directories
Dec 11 08:30:34 np0005555077.novalocal dracut[1290]: *** Including module: qemu ***
Dec 11 08:30:34 np0005555077.novalocal dracut[1290]: *** Including module: fstab-sys ***
Dec 11 08:30:34 np0005555077.novalocal dracut[1290]: *** Including module: rootfs-block ***
Dec 11 08:30:34 np0005555077.novalocal dracut[1290]: *** Including module: terminfo ***
Dec 11 08:30:34 np0005555077.novalocal dracut[1290]: *** Including module: udev-rules ***
Dec 11 08:30:34 np0005555077.novalocal dracut[1290]: Skipping udev rule: 91-permissions.rules
Dec 11 08:30:34 np0005555077.novalocal dracut[1290]: Skipping udev rule: 80-drivers-modprobe.rules
Dec 11 08:30:34 np0005555077.novalocal dracut[1290]: *** Including module: virtiofs ***
Dec 11 08:30:34 np0005555077.novalocal dracut[1290]: *** Including module: dracut-systemd ***
Dec 11 08:30:35 np0005555077.novalocal dracut[1290]: *** Including module: usrmount ***
Dec 11 08:30:35 np0005555077.novalocal dracut[1290]: *** Including module: base ***
Dec 11 08:30:35 np0005555077.novalocal chronyd[794]: Selected source 147.189.136.126 (2.centos.pool.ntp.org)
Dec 11 08:30:35 np0005555077.novalocal dracut[1290]: *** Including module: fs-lib ***
Dec 11 08:30:35 np0005555077.novalocal dracut[1290]: *** Including module: kdumpbase ***
Dec 11 08:30:35 np0005555077.novalocal dracut[1290]: *** Including module: microcode_ctl-fw_dir_override ***
Dec 11 08:30:35 np0005555077.novalocal dracut[1290]:   microcode_ctl module: mangling fw_dir
Dec 11 08:30:35 np0005555077.novalocal dracut[1290]:     microcode_ctl: reset fw_dir to "/lib/firmware/updates /lib/firmware"
Dec 11 08:30:35 np0005555077.novalocal dracut[1290]:     microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel"...
Dec 11 08:30:35 np0005555077.novalocal dracut[1290]:     microcode_ctl: configuration "intel" is ignored
Dec 11 08:30:35 np0005555077.novalocal dracut[1290]:     microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-2d-07"...
Dec 11 08:30:35 np0005555077.novalocal dracut[1290]:     microcode_ctl: configuration "intel-06-2d-07" is ignored
Dec 11 08:30:35 np0005555077.novalocal dracut[1290]:     microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-4e-03"...
Dec 11 08:30:35 np0005555077.novalocal dracut[1290]:     microcode_ctl: configuration "intel-06-4e-03" is ignored
Dec 11 08:30:35 np0005555077.novalocal dracut[1290]:     microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-4f-01"...
Dec 11 08:30:35 np0005555077.novalocal dracut[1290]:     microcode_ctl: configuration "intel-06-4f-01" is ignored
Dec 11 08:30:35 np0005555077.novalocal dracut[1290]:     microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-55-04"...
Dec 11 08:30:35 np0005555077.novalocal dracut[1290]:     microcode_ctl: configuration "intel-06-55-04" is ignored
Dec 11 08:30:35 np0005555077.novalocal dracut[1290]:     microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-5e-03"...
Dec 11 08:30:35 np0005555077.novalocal dracut[1290]:     microcode_ctl: configuration "intel-06-5e-03" is ignored
Dec 11 08:30:35 np0005555077.novalocal dracut[1290]:     microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-8c-01"...
Dec 11 08:30:35 np0005555077.novalocal dracut[1290]:     microcode_ctl: configuration "intel-06-8c-01" is ignored
Dec 11 08:30:35 np0005555077.novalocal dracut[1290]:     microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-8e-9e-0x-0xca"...
Dec 11 08:30:35 np0005555077.novalocal dracut[1290]:     microcode_ctl: configuration "intel-06-8e-9e-0x-0xca" is ignored
Dec 11 08:30:35 np0005555077.novalocal dracut[1290]:     microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-8e-9e-0x-dell"...
Dec 11 08:30:35 np0005555077.novalocal dracut[1290]:     microcode_ctl: configuration "intel-06-8e-9e-0x-dell" is ignored
Dec 11 08:30:35 np0005555077.novalocal dracut[1290]:     microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-8f-08"...
Dec 11 08:30:35 np0005555077.novalocal dracut[1290]:     microcode_ctl: configuration "intel-06-8f-08" is ignored
Dec 11 08:30:35 np0005555077.novalocal dracut[1290]:     microcode_ctl: final fw_dir: "/lib/firmware/updates /lib/firmware"
Dec 11 08:30:35 np0005555077.novalocal dracut[1290]: *** Including module: openssl ***
Dec 11 08:30:36 np0005555077.novalocal dracut[1290]: *** Including module: shutdown ***
Dec 11 08:30:36 np0005555077.novalocal dracut[1290]: *** Including module: squash ***
Dec 11 08:30:36 np0005555077.novalocal dracut[1290]: *** Including modules done ***
Dec 11 08:30:36 np0005555077.novalocal dracut[1290]: *** Installing kernel module dependencies ***
Dec 11 08:30:36 np0005555077.novalocal dracut[1290]: *** Installing kernel module dependencies done ***
Dec 11 08:30:36 np0005555077.novalocal dracut[1290]: *** Resolving executable dependencies ***
Dec 11 08:30:37 np0005555077.novalocal irqbalance[791]: Cannot change IRQ 25 affinity: Operation not permitted
Dec 11 08:30:37 np0005555077.novalocal irqbalance[791]: IRQ 25 affinity is now unmanaged
Dec 11 08:30:37 np0005555077.novalocal irqbalance[791]: Cannot change IRQ 31 affinity: Operation not permitted
Dec 11 08:30:37 np0005555077.novalocal irqbalance[791]: IRQ 31 affinity is now unmanaged
Dec 11 08:30:37 np0005555077.novalocal irqbalance[791]: Cannot change IRQ 28 affinity: Operation not permitted
Dec 11 08:30:37 np0005555077.novalocal irqbalance[791]: IRQ 28 affinity is now unmanaged
Dec 11 08:30:37 np0005555077.novalocal irqbalance[791]: Cannot change IRQ 32 affinity: Operation not permitted
Dec 11 08:30:37 np0005555077.novalocal irqbalance[791]: IRQ 32 affinity is now unmanaged
Dec 11 08:30:37 np0005555077.novalocal irqbalance[791]: Cannot change IRQ 30 affinity: Operation not permitted
Dec 11 08:30:37 np0005555077.novalocal irqbalance[791]: IRQ 30 affinity is now unmanaged
Dec 11 08:30:37 np0005555077.novalocal irqbalance[791]: Cannot change IRQ 29 affinity: Operation not permitted
Dec 11 08:30:37 np0005555077.novalocal irqbalance[791]: IRQ 29 affinity is now unmanaged
Dec 11 08:30:38 np0005555077.novalocal dracut[1290]: *** Resolving executable dependencies done ***
Dec 11 08:30:38 np0005555077.novalocal dracut[1290]: *** Generating early-microcode cpio image ***
Dec 11 08:30:38 np0005555077.novalocal dracut[1290]: *** Store current command line parameters ***
Dec 11 08:30:38 np0005555077.novalocal dracut[1290]: Stored kernel commandline:
Dec 11 08:30:38 np0005555077.novalocal dracut[1290]: No dracut internal kernel commandline stored in the initramfs
Dec 11 08:30:38 np0005555077.novalocal systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Dec 11 08:30:38 np0005555077.novalocal dracut[1290]: *** Install squash loader ***
Dec 11 08:30:39 np0005555077.novalocal dracut[1290]: *** Squashing the files inside the initramfs ***
Dec 11 08:30:40 np0005555077.novalocal dracut[1290]: *** Squashing the files inside the initramfs done ***
Dec 11 08:30:40 np0005555077.novalocal dracut[1290]: *** Creating image file '/boot/initramfs-5.14.0-648.el9.x86_64kdump.img' ***
Dec 11 08:30:40 np0005555077.novalocal dracut[1290]: *** Hardlinking files ***
Dec 11 08:30:40 np0005555077.novalocal dracut[1290]: Mode:           real
Dec 11 08:30:40 np0005555077.novalocal dracut[1290]: Files:          50
Dec 11 08:30:40 np0005555077.novalocal dracut[1290]: Linked:         0 files
Dec 11 08:30:40 np0005555077.novalocal dracut[1290]: Compared:       0 xattrs
Dec 11 08:30:40 np0005555077.novalocal dracut[1290]: Compared:       0 files
Dec 11 08:30:40 np0005555077.novalocal dracut[1290]: Saved:          0 B
Dec 11 08:30:40 np0005555077.novalocal dracut[1290]: Duration:       0.000421 seconds
Dec 11 08:30:40 np0005555077.novalocal dracut[1290]: *** Hardlinking files done ***
Dec 11 08:30:41 np0005555077.novalocal dracut[1290]: *** Creating initramfs image file '/boot/initramfs-5.14.0-648.el9.x86_64kdump.img' done ***
Dec 11 08:30:42 np0005555077.novalocal kdumpctl[1021]: kdump: kexec: loaded kdump kernel
Dec 11 08:30:42 np0005555077.novalocal kdumpctl[1021]: kdump: Starting kdump: [OK]
Dec 11 08:30:42 np0005555077.novalocal systemd[1]: Finished Crash recovery kernel arming.
Dec 11 08:30:42 np0005555077.novalocal systemd[1]: Startup finished in 3.999s (kernel) + 2.939s (initrd) + 19.460s (userspace) = 26.399s.
Dec 11 08:30:58 np0005555077.novalocal systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Dec 11 08:31:15 np0005555077.novalocal sshd-session[4301]: Accepted publickey for zuul from 38.102.83.114 port 60478 ssh2: RSA SHA256:zhs3MiW0JhxzckYcMHQES8SMYHj1iGcomnyzmbiwor8
Dec 11 08:31:15 np0005555077.novalocal systemd[1]: Created slice User Slice of UID 1000.
Dec 11 08:31:15 np0005555077.novalocal systemd[1]: Starting User Runtime Directory /run/user/1000...
Dec 11 08:31:15 np0005555077.novalocal systemd-logind[792]: New session 1 of user zuul.
Dec 11 08:31:15 np0005555077.novalocal systemd[1]: Finished User Runtime Directory /run/user/1000.
Dec 11 08:31:15 np0005555077.novalocal systemd[1]: Starting User Manager for UID 1000...
Dec 11 08:31:15 np0005555077.novalocal systemd[4305]: pam_unix(systemd-user:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Dec 11 08:31:15 np0005555077.novalocal systemd[4305]: Queued start job for default target Main User Target.
Dec 11 08:31:15 np0005555077.novalocal systemd[4305]: Created slice User Application Slice.
Dec 11 08:31:15 np0005555077.novalocal systemd[4305]: Started Mark boot as successful after the user session has run 2 minutes.
Dec 11 08:31:15 np0005555077.novalocal systemd[4305]: Started Daily Cleanup of User's Temporary Directories.
Dec 11 08:31:15 np0005555077.novalocal systemd[4305]: Reached target Paths.
Dec 11 08:31:15 np0005555077.novalocal systemd[4305]: Reached target Timers.
Dec 11 08:31:15 np0005555077.novalocal systemd[4305]: Starting D-Bus User Message Bus Socket...
Dec 11 08:31:15 np0005555077.novalocal systemd[4305]: Starting Create User's Volatile Files and Directories...
Dec 11 08:31:15 np0005555077.novalocal systemd[4305]: Listening on D-Bus User Message Bus Socket.
Dec 11 08:31:15 np0005555077.novalocal systemd[4305]: Reached target Sockets.
Dec 11 08:31:15 np0005555077.novalocal systemd[4305]: Finished Create User's Volatile Files and Directories.
Dec 11 08:31:15 np0005555077.novalocal systemd[4305]: Reached target Basic System.
Dec 11 08:31:15 np0005555077.novalocal systemd[4305]: Reached target Main User Target.
Dec 11 08:31:15 np0005555077.novalocal systemd[4305]: Startup finished in 136ms.
Dec 11 08:31:15 np0005555077.novalocal systemd[1]: Started User Manager for UID 1000.
Dec 11 08:31:15 np0005555077.novalocal systemd[1]: Started Session 1 of User zuul.
Dec 11 08:31:15 np0005555077.novalocal sshd-session[4301]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Dec 11 08:31:16 np0005555077.novalocal python3[4387]: ansible-setup Invoked with gather_subset=['!all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 11 08:31:19 np0005555077.novalocal python3[4415]: ansible-ansible.legacy.setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 11 08:31:27 np0005555077.novalocal python3[4473]: ansible-setup Invoked with gather_subset=['network'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 11 08:31:28 np0005555077.novalocal python3[4513]: ansible-zuul_console Invoked with path=/tmp/console-{log_uuid}.log port=19885 state=present
Dec 11 08:31:30 np0005555077.novalocal python3[4539]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCnt2InnHq6tApNig+P5WVoMHw1rlk7UfQdxOhvkyjN645QP7rAPOf+kiZ5vlE1JQdo2PD+c1o83wN4ZpjrJ6P2pHioYrGxNq//bkYcu2OvWWyKacU3XnXkr8D8sgH4mTPrVOFvx0VXPUA5NRbxgeuG5zwJU0pKdPqTFe1Eiyse5nHVWbaLfedSmapHiMrI0jnu0lQTlS7AclHMTRd01iU0vWBay/eZzB7grlUZKUEiMsOjSoWhhTnihf2M/5DM+vrD1mWyMLO+HeWe7Vrwl9JZuj8wWTA3IEK1/dSSboiR2+A5kMPqwDsrDNkrqvaew7lF6rIHRymiOvEtwK7700U6S+tK8EExFTNxrZXxDwvZLYdHVWCxIRNRxS5AxhPBsEqkhKqpmFjffm7AyhHZ2j3rxSir3TmxwVk0QLd2RPG3ypPAWYlz/rfVjwZQwuLY8pqmsnUKb7Lo9hln/NrQfRR5UDbY/j6nzSZyFwgd7KjdHB1Ld9Z/N3unxqaho2c81Zs= zuul-build-sshkey manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec 11 08:31:31 np0005555077.novalocal python3[4563]: ansible-file Invoked with state=directory path=/home/zuul/.ssh mode=448 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 11 08:31:31 np0005555077.novalocal python3[4662]: ansible-ansible.legacy.stat Invoked with path=/home/zuul/.ssh/id_rsa follow=False get_checksum=False checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec 11 08:31:31 np0005555077.novalocal python3[4733]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1765441891.3201268-251-118049856376950/source dest=/home/zuul/.ssh/id_rsa mode=384 force=False _original_basename=aa49124d53fb4dc8bbe02719772364ce_id_rsa follow=False checksum=920384a381621a9eadb407e248e38edf0c531f3e backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 11 08:31:32 np0005555077.novalocal python3[4856]: ansible-ansible.legacy.stat Invoked with path=/home/zuul/.ssh/id_rsa.pub follow=False get_checksum=False checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec 11 08:31:32 np0005555077.novalocal python3[4927]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1765441892.2799985-306-147923700939224/source dest=/home/zuul/.ssh/id_rsa.pub mode=420 force=False _original_basename=aa49124d53fb4dc8bbe02719772364ce_id_rsa.pub follow=False checksum=10cf1c2e97c3e098002d40625d0547a51192a872 backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 11 08:31:34 np0005555077.novalocal python3[4975]: ansible-ping Invoked with data=pong
Dec 11 08:31:35 np0005555077.novalocal python3[4999]: ansible-setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 11 08:31:37 np0005555077.novalocal irqbalance[791]: Cannot change IRQ 27 affinity: Operation not permitted
Dec 11 08:31:37 np0005555077.novalocal irqbalance[791]: IRQ 27 affinity is now unmanaged
Dec 11 08:31:38 np0005555077.novalocal python3[5057]: ansible-zuul_debug_info Invoked with ipv4_route_required=False ipv6_route_required=False image_manifest_files=['/etc/dib-builddate.txt', '/etc/image-hostname.txt'] image_manifest=None traceroute_host=None
Dec 11 08:31:39 np0005555077.novalocal python3[5089]: ansible-file Invoked with path=/home/zuul/zuul-output/logs state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 11 08:31:39 np0005555077.novalocal python3[5113]: ansible-file Invoked with path=/home/zuul/zuul-output/artifacts state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 11 08:31:39 np0005555077.novalocal python3[5137]: ansible-file Invoked with path=/home/zuul/zuul-output/docs state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 11 08:31:40 np0005555077.novalocal python3[5161]: ansible-file Invoked with path=/home/zuul/zuul-output/logs state=directory mode=493 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 11 08:31:40 np0005555077.novalocal python3[5185]: ansible-file Invoked with path=/home/zuul/zuul-output/artifacts state=directory mode=493 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 11 08:31:40 np0005555077.novalocal python3[5209]: ansible-file Invoked with path=/home/zuul/zuul-output/docs state=directory mode=493 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 11 08:31:42 np0005555077.novalocal sudo[5233]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gfvvoaobrqsdplihzilongbhlpcozivy ; /usr/bin/python3'
Dec 11 08:31:42 np0005555077.novalocal sudo[5233]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 11 08:31:42 np0005555077.novalocal python3[5235]: ansible-file Invoked with path=/etc/ci state=directory owner=root group=root mode=493 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 11 08:31:42 np0005555077.novalocal sudo[5233]: pam_unix(sudo:session): session closed for user root
Dec 11 08:31:43 np0005555077.novalocal sudo[5311]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jmilhvcbaiapdvnewpiioxketspuagrk ; /usr/bin/python3'
Dec 11 08:31:43 np0005555077.novalocal sudo[5311]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 11 08:31:43 np0005555077.novalocal python3[5313]: ansible-ansible.legacy.stat Invoked with path=/etc/ci/mirror_info.sh follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec 11 08:31:43 np0005555077.novalocal sudo[5311]: pam_unix(sudo:session): session closed for user root
Dec 11 08:31:43 np0005555077.novalocal sudo[5384]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ijlkcugakhcvtvxfifgziulkwwfxplbv ; /usr/bin/python3'
Dec 11 08:31:43 np0005555077.novalocal sudo[5384]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 11 08:31:44 np0005555077.novalocal python3[5386]: ansible-ansible.legacy.copy Invoked with dest=/etc/ci/mirror_info.sh owner=root group=root mode=420 src=/home/zuul/.ansible/tmp/ansible-tmp-1765441902.9854062-31-43498135034557/source follow=False _original_basename=mirror_info.sh.j2 checksum=92d92a03afdddee82732741071f662c729080c35 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 11 08:31:44 np0005555077.novalocal sudo[5384]: pam_unix(sudo:session): session closed for user root
Dec 11 08:31:44 np0005555077.novalocal python3[5434]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAABIwAAAQEA4Z/c9osaGGtU6X8fgELwfj/yayRurfcKA0HMFfdpPxev2dbwljysMuzoVp4OZmW1gvGtyYPSNRvnzgsaabPNKNo2ym5NToCP6UM+KSe93aln4BcM/24mXChYAbXJQ5Bqq/pIzsGs/pKetQN+vwvMxLOwTvpcsCJBXaa981RKML6xj9l/UZ7IIq1HSEKMvPLxZMWdu0Ut8DkCd5F4nOw9Wgml2uYpDCj5LLCrQQ9ChdOMz8hz6SighhNlRpPkvPaet3OXxr/ytFMu7j7vv06CaEnuMMiY2aTWN1Imin9eHAylIqFHta/3gFfQSWt9jXM7owkBLKL7ATzhaAn+fjNupw== arxcruz@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec 11 08:31:45 np0005555077.novalocal python3[5458]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQDS4Fn6k4deCnIlOtLWqZJyksbepjQt04j8Ed8CGx9EKkj0fKiAxiI4TadXQYPuNHMixZy4Nevjb6aDhL5Z906TfvNHKUrjrG7G26a0k8vdc61NEQ7FmcGMWRLwwc6ReDO7lFpzYKBMk4YqfWgBuGU/K6WLKiVW2cVvwIuGIaYrE1OiiX0iVUUk7KApXlDJMXn7qjSYynfO4mF629NIp8FJal38+Kv+HA+0QkE5Y2xXnzD4Lar5+keymiCHRntPppXHeLIRzbt0gxC7v3L72hpQ3BTBEzwHpeS8KY+SX1y5lRMN45thCHfJqGmARJREDjBvWG8JXOPmVIKQtZmVcD5b mandreou@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec 11 08:31:45 np0005555077.novalocal python3[5482]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQC9MiLfy30deHA7xPOAlew5qUq3UP2gmRMYJi8PtkjFB20/DKeWwWNnkZPqP9AayruRoo51SIiVg870gbZE2jYl+Ncx/FYDe56JeC3ySZsXoAVkC9bP7gkOGqOmJjirvAgPMI7bogVz8i+66Q4Ar7OKTp3762G4IuWPPEg4ce4Y7lx9qWocZapHYq4cYKMxrOZ7SEbFSATBbe2bPZAPKTw8do/Eny+Hq/LkHFhIeyra6cqTFQYShr+zPln0Cr+ro/pDX3bB+1ubFgTpjpkkkQsLhDfR6cCdCWM2lgnS3BTtYj5Ct9/JRPR5YOphqZz+uB+OEu2IL68hmU9vNTth1KeX rlandy@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec 11 08:31:45 np0005555077.novalocal python3[5506]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIFCbgz8gdERiJlk2IKOtkjQxEXejrio6ZYMJAVJYpOIp raukadah@gmail.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec 11 08:31:45 np0005555077.novalocal python3[5530]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIBqb3Q/9uDf4LmihQ7xeJ9gA/STIQUFPSfyyV0m8AoQi bshewale@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec 11 08:31:46 np0005555077.novalocal python3[5554]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQC0I8QqQx0Az2ysJt2JuffucLijhBqnsXKEIx5GyHwxVULROa8VtNFXUDH6ZKZavhiMcmfHB2+TBTda+lDP4FldYj06dGmzCY+IYGa+uDRdxHNGYjvCfLFcmLlzRK6fNbTcui+KlUFUdKe0fb9CRoGKyhlJD5GRkM1Dv+Yb6Bj+RNnmm1fVGYxzmrD2utvffYEb0SZGWxq2R9gefx1q/3wCGjeqvufEV+AskPhVGc5T7t9eyZ4qmslkLh1/nMuaIBFcr9AUACRajsvk6mXrAN1g3HlBf2gQlhi1UEyfbqIQvzzFtsbLDlSum/KmKjy818GzvWjERfQ0VkGzCd9bSLVL dviroel@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec 11 08:31:46 np0005555077.novalocal python3[5578]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDLOQd4ZLtkZXQGY6UwAr/06ppWQK4fDO3HaqxPk98csyOCBXsliSKK39Bso828+5srIXiW7aI6aC9P5mwi4mUZlGPfJlQbfrcGvY+b/SocuvaGK+1RrHLoJCT52LBhwgrzlXio2jeksZeein8iaTrhsPrOAs7KggIL/rB9hEiB3NaOPWhhoCP4vlW6MEMExGcqB/1FVxXFBPnLkEyW0Lk7ycVflZl2ocRxbfjZi0+tI1Wlinp8PvSQSc/WVrAcDgKjc/mB4ODPOyYy3G8FHgfMsrXSDEyjBKgLKMsdCrAUcqJQWjkqXleXSYOV4q3pzL+9umK+q/e3P/bIoSFQzmJKTU1eDfuvPXmow9F5H54fii/Da7ezlMJ+wPGHJrRAkmzvMbALy7xwswLhZMkOGNtRcPqaKYRmIBKpw3o6bCTtcNUHOtOQnzwY8JzrM2eBWJBXAANYw+9/ho80JIiwhg29CFNpVBuHbql2YxJQNrnl90guN65rYNpDxdIluweyUf8= anbanerj@kaermorhen manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec 11 08:31:46 np0005555077.novalocal python3[5602]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQC3VwV8Im9kRm49lt3tM36hj4Zv27FxGo4C1Q/0jqhzFmHY7RHbmeRr8ObhwWoHjXSozKWg8FL5ER0z3hTwL0W6lez3sL7hUaCmSuZmG5Hnl3x4vTSxDI9JZ/Y65rtYiiWQo2fC5xJhU/4+0e5e/pseCm8cKRSu+SaxhO+sd6FDojA2x1BzOzKiQRDy/1zWGp/cZkxcEuB1wHI5LMzN03c67vmbu+fhZRAUO4dQkvcnj2LrhQtpa+ytvnSjr8icMDosf1OsbSffwZFyHB/hfWGAfe0eIeSA2XPraxiPknXxiPKx2MJsaUTYbsZcm3EjFdHBBMumw5rBI74zLrMRvCO9GwBEmGT4rFng1nP+yw5DB8sn2zqpOsPg1LYRwCPOUveC13P6pgsZZPh812e8v5EKnETct+5XI3dVpdw6CnNiLwAyVAF15DJvBGT/u1k0Myg/bQn+Gv9k2MSj6LvQmf6WbZu2Wgjm30z3FyCneBqTL7mLF19YXzeC0ufHz5pnO1E= dasm@fedora manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec 11 08:31:46 np0005555077.novalocal python3[5626]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIHUnwjB20UKmsSed9X73eGNV5AOEFccQ3NYrRW776pEk cjeanner manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec 11 08:31:47 np0005555077.novalocal python3[5650]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIDercCMGn8rW1C4P67tHgtflPdTeXlpyUJYH+6XDd2lR jgilaber@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec 11 08:31:47 np0005555077.novalocal python3[5674]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIAMI6kkg9Wg0sG7jIJmyZemEBwUn1yzNpQQd3gnulOmZ adrianfuscoarnejo@gmail.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec 11 08:31:47 np0005555077.novalocal python3[5698]: ansible-authorized_key Invoked with user=zuul state=present key=ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBPijwpQu/3jhhhBZInXNOLEH57DrknPc3PLbsRvYyJIFzwYjX+WD4a7+nGnMYS42MuZk6TJcVqgnqofVx4isoD4= ramishra@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec 11 08:31:47 np0005555077.novalocal python3[5722]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIGpU/BepK3qX0NRf5Np+dOBDqzQEefhNrw2DCZaH3uWW rebtoor@monolith manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec 11 08:31:48 np0005555077.novalocal python3[5746]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIDK0iKdi8jQTpQrDdLVH/AAgLVYyTXF7AQ1gjc/5uT3t ykarel@yatinkarel manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec 11 08:31:48 np0005555077.novalocal python3[5770]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIF/V/cLotA6LZeO32VL45Hd78skuA2lJA425Sm2LlQeZ fmount@horcrux manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec 11 08:31:48 np0005555077.novalocal python3[5794]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIDa7QCjuDMVmRPo1rREbGwzYeBCYVN+Ou/3WKXZEC6Sr manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec 11 08:31:49 np0005555077.novalocal python3[5818]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAACAQCfNtF7NvKl915TGsGGoseUb06Hj8L/S4toWf0hExeY+F00woL6NvBlJD0nDct+P5a22I4EhvoQCRQ8reaPCm1lybR3uiRIJsj+8zkVvLwby9LXzfZorlNG9ofjd00FEmB09uW/YvTl6Q9XwwwX6tInzIOv3TMqTHHGOL74ibbj8J/FJR0cFEyj0z4WQRvtkh32xAHl83gbuINryMt0sqRI+clj2381NKL55DRLQrVw0gsfqqxiHAnXg21qWmc4J+b9e9kiuAFQjcjwTVkwJCcg3xbPwC/qokYRby/Y5S40UUd7/jEARGXT7RZgpzTuDd1oZiCVrnrqJNPaMNdVv5MLeFdf1B7iIe5aa/fGouX7AO4SdKhZUdnJmCFAGvjC6S3JMZ2wAcUl+OHnssfmdj7XL50cLo27vjuzMtLAgSqi6N99m92WCF2s8J9aVzszX7Xz9OKZCeGsiVJp3/NdABKzSEAyM9xBD/5Vho894Sav+otpySHe3p6RUTgbB5Zu8VyZRZ/UtB3ueXxyo764yrc6qWIDqrehm84Xm9g+/jpIBzGPl07NUNJpdt/6Sgf9RIKXw/7XypO5yZfUcuFNGTxLfqjTNrtgLZNcjfav6sSdVXVcMPL//XNuRdKmVFaO76eV/oGMQGr1fGcCD+N+CpI7+Q+fCNB6VFWG4nZFuI/Iuw== averdagu@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec 11 08:31:49 np0005555077.novalocal python3[5842]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDq8l27xI+QlQVdS4djp9ogSoyrNE2+Ox6vKPdhSNL1J3PE5w+WCSvMz9A5gnNuH810zwbekEApbxTze/gLQJwBHA52CChfURpXrFaxY7ePXRElwKAL3mJfzBWY/c5jnNL9TCVmFJTGZkFZP3Nh+BMgZvL6xBkt3WKm6Uq18qzd9XeKcZusrA+O+uLv1fVeQnadY9RIqOCyeFYCzLWrUfTyE8x/XG0hAWIM7qpnF2cALQS2h9n4hW5ybiUN790H08wf9hFwEf5nxY9Z9dVkPFQiTSGKNBzmnCXU9skxS/xhpFjJ5duGSZdtAHe9O+nGZm9c67hxgtf8e5PDuqAdXEv2cf6e3VBAt+Bz8EKI3yosTj0oZHfwr42Yzb1l/SKy14Rggsrc9KAQlrGXan6+u2jcQqqx7l+SWmnpFiWTV9u5cWj2IgOhApOitmRBPYqk9rE2usfO0hLn/Pj/R/Nau4803e1/EikdLE7Ps95s9mX5jRDjAoUa2JwFF5RsVFyL910= ashigupt@ashigupt.remote.csb manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec 11 08:31:49 np0005555077.novalocal python3[5866]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIOKLl0NYKwoZ/JY5KeZU8VwRAggeOxqQJeoqp3dsAaY9 manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec 11 08:31:50 np0005555077.novalocal python3[5890]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIASASQOH2BcOyLKuuDOdWZlPi2orcjcA8q4400T73DLH evallesp@fedora manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec 11 08:31:50 np0005555077.novalocal python3[5914]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAILeBWlamUph+jRKV2qrx1PGU7vWuGIt5+z9k96I8WehW amsinha@amsinha-mac manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec 11 08:31:50 np0005555077.novalocal python3[5938]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIANvVgvJBlK3gb1yz5uef/JqIGq4HLEmY2dYA8e37swb morenod@redhat-laptop manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec 11 08:31:50 np0005555077.novalocal python3[5962]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAACAQDZdI7t1cxYx65heVI24HTV4F7oQLW1zyfxHreL2TIJKxjyrUUKIFEUmTutcBlJRLNT2Eoix6x1sOw9YrchloCLcn//SGfTElr9mSc5jbjb7QXEU+zJMhtxyEJ1Po3CUGnj7ckiIXw7wcawZtrEOAQ9pH3ExYCJcEMiyNjRQZCxT3tPK+S4B95EWh5Fsrz9CkwpjNRPPH7LigCeQTM3Wc7r97utAslBUUvYceDSLA7rMgkitJE38b7rZBeYzsGQ8YYUBjTCtehqQXxCRjizbHWaaZkBU+N3zkKB6n/iCNGIO690NK7A/qb6msTijiz1PeuM8ThOsi9qXnbX5v0PoTpcFSojV7NHAQ71f0XXuS43FhZctT+Dcx44dT8Fb5vJu2cJGrk+qF8ZgJYNpRS7gPg0EG2EqjK7JMf9ULdjSu0r+KlqIAyLvtzT4eOnQipoKlb/WG5D/0ohKv7OMQ352ggfkBFIQsRXyyTCT98Ft9juqPuahi3CAQmP4H9dyE+7+Kz437PEtsxLmfm6naNmWi7Ee1DqWPwS8rEajsm4sNM4wW9gdBboJQtc0uZw0DfLj1I9r3Mc8Ol0jYtz0yNQDSzVLrGCaJlC311trU70tZ+ZkAVV6Mn8lOhSbj1cK0lvSr6ZK4dgqGl3I1eTZJJhbLNdg7UOVaiRx9543+C/p/As7w== brjackma@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec 11 08:31:51 np0005555077.novalocal python3[5986]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIKwedoZ0TWPJX/z/4TAbO/kKcDZOQVgRH0hAqrL5UCI1 vcastell@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec 11 08:31:51 np0005555077.novalocal python3[6010]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIEmv8sE8GCk6ZTPIqF0FQrttBdL3mq7rCm/IJy0xDFh7 michburk@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec 11 08:31:51 np0005555077.novalocal python3[6034]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAICy6GpGEtwevXEEn4mmLR5lmSLe23dGgAvzkB9DMNbkf rsafrono@rsafrono manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec 11 08:31:55 np0005555077.novalocal sudo[6058]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-knvoywgwfbrlaplqqbkkfzfbqaxeqxrn ; /usr/bin/python3'
Dec 11 08:31:55 np0005555077.novalocal sudo[6058]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 11 08:31:55 np0005555077.novalocal python3[6060]: ansible-community.general.timezone Invoked with name=UTC hwclock=None
Dec 11 08:31:55 np0005555077.novalocal systemd[1]: Starting Time & Date Service...
Dec 11 08:31:55 np0005555077.novalocal systemd[1]: Started Time & Date Service.
Dec 11 08:31:55 np0005555077.novalocal systemd-timedated[6062]: Changed time zone to 'UTC' (UTC).
Dec 11 08:31:55 np0005555077.novalocal sudo[6058]: pam_unix(sudo:session): session closed for user root
Dec 11 08:31:55 np0005555077.novalocal sudo[6089]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fzrfcqawosxcvkcnmmnxqmgfiqbdkkkw ; /usr/bin/python3'
Dec 11 08:31:55 np0005555077.novalocal sudo[6089]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 11 08:31:55 np0005555077.novalocal python3[6091]: ansible-file Invoked with path=/etc/nodepool state=directory mode=511 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 11 08:31:55 np0005555077.novalocal sudo[6089]: pam_unix(sudo:session): session closed for user root
Dec 11 08:31:56 np0005555077.novalocal python3[6167]: ansible-ansible.legacy.stat Invoked with path=/etc/nodepool/sub_nodes follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec 11 08:31:56 np0005555077.novalocal python3[6238]: ansible-ansible.legacy.copy Invoked with dest=/etc/nodepool/sub_nodes src=/home/zuul/.ansible/tmp/ansible-tmp-1765441916.1353586-251-85631272830039/source _original_basename=tmp8odq8i5m follow=False checksum=da39a3ee5e6b4b0d3255bfef95601890afd80709 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 11 08:31:57 np0005555077.novalocal python3[6338]: ansible-ansible.legacy.stat Invoked with path=/etc/nodepool/sub_nodes_private follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec 11 08:31:57 np0005555077.novalocal python3[6409]: ansible-ansible.legacy.copy Invoked with dest=/etc/nodepool/sub_nodes_private src=/home/zuul/.ansible/tmp/ansible-tmp-1765441917.0742357-301-212355898186252/source _original_basename=tmpm9lwnyim follow=False checksum=da39a3ee5e6b4b0d3255bfef95601890afd80709 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 11 08:31:58 np0005555077.novalocal sudo[6509]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-agclvdwrlrzzofunfmxapzrzylmwazkk ; /usr/bin/python3'
Dec 11 08:31:58 np0005555077.novalocal sudo[6509]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 11 08:31:58 np0005555077.novalocal python3[6511]: ansible-ansible.legacy.stat Invoked with path=/etc/nodepool/node_private follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec 11 08:31:58 np0005555077.novalocal sudo[6509]: pam_unix(sudo:session): session closed for user root
Dec 11 08:31:58 np0005555077.novalocal sudo[6582]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fseqqondbemvozdrukphlvnzkkjfvnim ; /usr/bin/python3'
Dec 11 08:31:58 np0005555077.novalocal sudo[6582]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 11 08:31:58 np0005555077.novalocal python3[6584]: ansible-ansible.legacy.copy Invoked with dest=/etc/nodepool/node_private src=/home/zuul/.ansible/tmp/ansible-tmp-1765441918.307388-381-154946474872513/source _original_basename=tmpbtcgu1i5 follow=False checksum=c1c07ac481f2f30d527e464cfc98c5e1fe086ed6 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 11 08:31:59 np0005555077.novalocal sudo[6582]: pam_unix(sudo:session): session closed for user root
Dec 11 08:31:59 np0005555077.novalocal python3[6632]: ansible-ansible.legacy.command Invoked with _raw_params=cp .ssh/id_rsa /etc/nodepool/id_rsa zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False _uses_shell=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 11 08:31:59 np0005555077.novalocal python3[6658]: ansible-ansible.legacy.command Invoked with _raw_params=cp .ssh/id_rsa.pub /etc/nodepool/id_rsa.pub zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False _uses_shell=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 11 08:32:00 np0005555077.novalocal sudo[6736]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oydkvkzfygubdczilhdqgdiyannocqrn ; /usr/bin/python3'
Dec 11 08:32:00 np0005555077.novalocal sudo[6736]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 11 08:32:00 np0005555077.novalocal python3[6738]: ansible-ansible.legacy.stat Invoked with path=/etc/sudoers.d/zuul-sudo-grep follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec 11 08:32:00 np0005555077.novalocal sudo[6736]: pam_unix(sudo:session): session closed for user root
Dec 11 08:32:00 np0005555077.novalocal sudo[6809]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qkewnllkmuyvxagzhtcfirzrxxuunylz ; /usr/bin/python3'
Dec 11 08:32:00 np0005555077.novalocal sudo[6809]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 11 08:32:00 np0005555077.novalocal python3[6811]: ansible-ansible.legacy.copy Invoked with dest=/etc/sudoers.d/zuul-sudo-grep mode=288 src=/home/zuul/.ansible/tmp/ansible-tmp-1765441920.0736518-451-48005774702387/source _original_basename=tmpcfy_p8ow follow=False checksum=bdca1a77493d00fb51567671791f4aa30f66c2f0 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 11 08:32:00 np0005555077.novalocal sudo[6809]: pam_unix(sudo:session): session closed for user root
Dec 11 08:32:01 np0005555077.novalocal sudo[6860]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mdateauogctulipisibifiefuxawkwvv ; /usr/bin/python3'
Dec 11 08:32:01 np0005555077.novalocal sudo[6860]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 11 08:32:01 np0005555077.novalocal python3[6862]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/visudo -c zuul_log_id=fa163efc-24cc-a534-11ee-00000000001f-1-compute0 zuul_ansible_split_streams=False _uses_shell=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 11 08:32:01 np0005555077.novalocal sudo[6860]: pam_unix(sudo:session): session closed for user root
Dec 11 08:32:02 np0005555077.novalocal python3[6890]: ansible-ansible.legacy.command Invoked with executable=/bin/bash _raw_params=env
                                                       _uses_shell=True zuul_log_id=fa163efc-24cc-a534-11ee-000000000020-1-compute0 zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None creates=None removes=None stdin=None
Dec 11 08:32:03 np0005555077.novalocal python3[6918]: ansible-file Invoked with path=/home/zuul/workspace state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 11 08:32:25 np0005555077.novalocal systemd[1]: systemd-timedated.service: Deactivated successfully.
Dec 11 08:32:40 np0005555077.novalocal sudo[6944]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ykgrycdslyofemadtfnmjcvltfofgqvx ; /usr/bin/python3'
Dec 11 08:32:40 np0005555077.novalocal sudo[6944]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 11 08:32:40 np0005555077.novalocal python3[6946]: ansible-ansible.builtin.file Invoked with path=/etc/ci/env state=directory mode=0755 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 11 08:32:40 np0005555077.novalocal sudo[6944]: pam_unix(sudo:session): session closed for user root
Dec 11 08:33:22 np0005555077.novalocal kernel: pci 0000:00:07.0: [1af4:1000] type 00 class 0x020000 conventional PCI endpoint
Dec 11 08:33:22 np0005555077.novalocal kernel: pci 0000:00:07.0: BAR 0 [io  0x0000-0x003f]
Dec 11 08:33:22 np0005555077.novalocal kernel: pci 0000:00:07.0: BAR 1 [mem 0x00000000-0x00000fff]
Dec 11 08:33:22 np0005555077.novalocal kernel: pci 0000:00:07.0: BAR 4 [mem 0x00000000-0x00003fff 64bit pref]
Dec 11 08:33:22 np0005555077.novalocal kernel: pci 0000:00:07.0: ROM [mem 0x00000000-0x0007ffff pref]
Dec 11 08:33:22 np0005555077.novalocal kernel: pci 0000:00:07.0: ROM [mem 0xc0000000-0xc007ffff pref]: assigned
Dec 11 08:33:22 np0005555077.novalocal kernel: pci 0000:00:07.0: BAR 4 [mem 0x240000000-0x240003fff 64bit pref]: assigned
Dec 11 08:33:22 np0005555077.novalocal kernel: pci 0000:00:07.0: BAR 1 [mem 0xc0080000-0xc0080fff]: assigned
Dec 11 08:33:22 np0005555077.novalocal kernel: pci 0000:00:07.0: BAR 0 [io  0x1000-0x103f]: assigned
Dec 11 08:33:22 np0005555077.novalocal kernel: virtio-pci 0000:00:07.0: enabling device (0000 -> 0003)
Dec 11 08:33:22 np0005555077.novalocal NetworkManager[864]: <info>  [1765442002.9875] manager: (eth1): new Ethernet device (/org/freedesktop/NetworkManager/Devices/3)
Dec 11 08:33:22 np0005555077.novalocal systemd-udevd[6947]: Network interface NamePolicy= disabled on kernel command line.
Dec 11 08:33:23 np0005555077.novalocal NetworkManager[864]: <info>  [1765442003.0046] device (eth1): state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Dec 11 08:33:23 np0005555077.novalocal NetworkManager[864]: <info>  [1765442003.0086] settings: (eth1): created default wired connection 'Wired connection 1'
Dec 11 08:33:23 np0005555077.novalocal NetworkManager[864]: <info>  [1765442003.0091] device (eth1): carrier: link connected
Dec 11 08:33:23 np0005555077.novalocal NetworkManager[864]: <info>  [1765442003.0095] device (eth1): state change: unavailable -> disconnected (reason 'carrier-changed', managed-type: 'full')
Dec 11 08:33:23 np0005555077.novalocal NetworkManager[864]: <info>  [1765442003.0103] policy: auto-activating connection 'Wired connection 1' (5a3b4b8d-0dcd-398c-b979-a7322f259c3a)
Dec 11 08:33:23 np0005555077.novalocal NetworkManager[864]: <info>  [1765442003.0107] device (eth1): Activation: starting connection 'Wired connection 1' (5a3b4b8d-0dcd-398c-b979-a7322f259c3a)
Dec 11 08:33:23 np0005555077.novalocal NetworkManager[864]: <info>  [1765442003.0108] device (eth1): state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Dec 11 08:33:23 np0005555077.novalocal NetworkManager[864]: <info>  [1765442003.0114] device (eth1): state change: prepare -> config (reason 'none', managed-type: 'full')
Dec 11 08:33:23 np0005555077.novalocal NetworkManager[864]: <info>  [1765442003.0120] device (eth1): state change: config -> ip-config (reason 'none', managed-type: 'full')
Dec 11 08:33:23 np0005555077.novalocal NetworkManager[864]: <info>  [1765442003.0126] dhcp4 (eth1): activation: beginning transaction (timeout in 45 seconds)
Dec 11 08:33:23 np0005555077.novalocal systemd[4305]: Starting Mark boot as successful...
Dec 11 08:33:23 np0005555077.novalocal systemd[4305]: Finished Mark boot as successful.
Dec 11 08:33:24 np0005555077.novalocal python3[6975]: ansible-ansible.legacy.command Invoked with _raw_params=ip -j link zuul_log_id=fa163efc-24cc-d2a4-3bf1-000000000128-0-controller zuul_ansible_split_streams=False _uses_shell=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 11 08:33:33 np0005555077.novalocal sudo[7053]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gzadpmxmqsjsfwkocphsfjpelfkygdlc ; OS_CLOUD=vexxhost /usr/bin/python3'
Dec 11 08:33:33 np0005555077.novalocal sudo[7053]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 11 08:33:34 np0005555077.novalocal python3[7055]: ansible-ansible.legacy.stat Invoked with path=/etc/NetworkManager/system-connections/ci-private-network.nmconnection follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec 11 08:33:34 np0005555077.novalocal sudo[7053]: pam_unix(sudo:session): session closed for user root
Dec 11 08:33:34 np0005555077.novalocal sudo[7126]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fyeaikrqsbcopwosttzpbvrzzcknarxe ; OS_CLOUD=vexxhost /usr/bin/python3'
Dec 11 08:33:34 np0005555077.novalocal sudo[7126]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 11 08:33:34 np0005555077.novalocal python3[7128]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1765442013.7833211-104-27745220195435/source dest=/etc/NetworkManager/system-connections/ci-private-network.nmconnection mode=0600 owner=root group=root follow=False _original_basename=bootstrap-ci-network-nm-connection.nmconnection.j2 checksum=c5c8c8eb0cecc7be465602a0df35fc180cb2316a backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 11 08:33:34 np0005555077.novalocal sudo[7126]: pam_unix(sudo:session): session closed for user root
Dec 11 08:33:35 np0005555077.novalocal sudo[7176]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ibnldjuwahpfnlxcothomgqmvuifvcet ; OS_CLOUD=vexxhost /usr/bin/python3'
Dec 11 08:33:35 np0005555077.novalocal sudo[7176]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 11 08:33:35 np0005555077.novalocal python3[7178]: ansible-ansible.builtin.systemd Invoked with name=NetworkManager state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Dec 11 08:33:35 np0005555077.novalocal systemd[1]: NetworkManager-wait-online.service: Deactivated successfully.
Dec 11 08:33:35 np0005555077.novalocal systemd[1]: Stopped Network Manager Wait Online.
Dec 11 08:33:35 np0005555077.novalocal systemd[1]: Stopping Network Manager Wait Online...
Dec 11 08:33:35 np0005555077.novalocal systemd[1]: Stopping Network Manager...
Dec 11 08:33:35 np0005555077.novalocal NetworkManager[864]: <info>  [1765442015.3163] caught SIGTERM, shutting down normally.
Dec 11 08:33:35 np0005555077.novalocal NetworkManager[864]: <info>  [1765442015.3178] dhcp4 (eth0): canceled DHCP transaction
Dec 11 08:33:35 np0005555077.novalocal NetworkManager[864]: <info>  [1765442015.3178] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
Dec 11 08:33:35 np0005555077.novalocal NetworkManager[864]: <info>  [1765442015.3178] dhcp4 (eth0): state changed no lease
Dec 11 08:33:35 np0005555077.novalocal NetworkManager[864]: <info>  [1765442015.3181] manager: NetworkManager state is now CONNECTING
Dec 11 08:33:35 np0005555077.novalocal NetworkManager[864]: <info>  [1765442015.3384] dhcp4 (eth1): canceled DHCP transaction
Dec 11 08:33:35 np0005555077.novalocal NetworkManager[864]: <info>  [1765442015.3384] dhcp4 (eth1): state changed no lease
Dec 11 08:33:35 np0005555077.novalocal NetworkManager[864]: <info>  [1765442015.3444] exiting (success)
Dec 11 08:33:35 np0005555077.novalocal systemd[1]: Starting Network Manager Script Dispatcher Service...
Dec 11 08:33:35 np0005555077.novalocal systemd[1]: Started Network Manager Script Dispatcher Service.
Dec 11 08:33:35 np0005555077.novalocal systemd[1]: NetworkManager.service: Deactivated successfully.
Dec 11 08:33:35 np0005555077.novalocal systemd[1]: Stopped Network Manager.
Dec 11 08:33:35 np0005555077.novalocal systemd[1]: NetworkManager.service: Consumed 1.499s CPU time, 9.9M memory peak.
Dec 11 08:33:35 np0005555077.novalocal systemd[1]: Starting Network Manager...
Dec 11 08:33:35 np0005555077.novalocal NetworkManager[7188]: <info>  [1765442015.4291] NetworkManager (version 1.54.2-1.el9) is starting... (after a restart, boot:13dd1f60-0a56-492c-a25c-280d72789ed1)
Dec 11 08:33:35 np0005555077.novalocal NetworkManager[7188]: <info>  [1765442015.4294] Read config: /etc/NetworkManager/NetworkManager.conf, /run/NetworkManager/conf.d/15-carrier-timeout.conf
Dec 11 08:33:35 np0005555077.novalocal NetworkManager[7188]: <info>  [1765442015.4375] manager[0x562a8c177000]: monitoring kernel firmware directory '/lib/firmware'.
Dec 11 08:33:35 np0005555077.novalocal systemd[1]: Starting Hostname Service...
Dec 11 08:33:35 np0005555077.novalocal systemd[1]: Started Hostname Service.
Dec 11 08:33:35 np0005555077.novalocal NetworkManager[7188]: <info>  [1765442015.5273] hostname: hostname: using hostnamed
Dec 11 08:33:35 np0005555077.novalocal NetworkManager[7188]: <info>  [1765442015.5276] hostname: static hostname changed from (none) to "np0005555077.novalocal"
Dec 11 08:33:35 np0005555077.novalocal NetworkManager[7188]: <info>  [1765442015.5288] dns-mgr: init: dns=default,systemd-resolved rc-manager=symlink (auto)
Dec 11 08:33:35 np0005555077.novalocal NetworkManager[7188]: <info>  [1765442015.5292] manager[0x562a8c177000]: rfkill: Wi-Fi hardware radio set enabled
Dec 11 08:33:35 np0005555077.novalocal NetworkManager[7188]: <info>  [1765442015.5293] manager[0x562a8c177000]: rfkill: WWAN hardware radio set enabled
Dec 11 08:33:35 np0005555077.novalocal NetworkManager[7188]: <info>  [1765442015.5320] Loaded device plugin: NMTeamFactory (/usr/lib64/NetworkManager/1.54.2-1.el9/libnm-device-plugin-team.so)
Dec 11 08:33:35 np0005555077.novalocal NetworkManager[7188]: <info>  [1765442015.5320] manager: rfkill: Wi-Fi enabled by radio killswitch; enabled by state file
Dec 11 08:33:35 np0005555077.novalocal NetworkManager[7188]: <info>  [1765442015.5321] manager: rfkill: WWAN enabled by radio killswitch; enabled by state file
Dec 11 08:33:35 np0005555077.novalocal NetworkManager[7188]: <info>  [1765442015.5321] manager: Networking is enabled by state file
Dec 11 08:33:35 np0005555077.novalocal NetworkManager[7188]: <info>  [1765442015.5323] settings: Loaded settings plugin: keyfile (internal)
Dec 11 08:33:35 np0005555077.novalocal NetworkManager[7188]: <info>  [1765442015.5331] settings: Loaded settings plugin: ifcfg-rh ("/usr/lib64/NetworkManager/1.54.2-1.el9/libnm-settings-plugin-ifcfg-rh.so")
Dec 11 08:33:35 np0005555077.novalocal NetworkManager[7188]: <info>  [1765442015.5359] Warning: the ifcfg-rh plugin is deprecated, please migrate connections to the keyfile format using "nmcli connection migrate"
Dec 11 08:33:35 np0005555077.novalocal NetworkManager[7188]: <info>  [1765442015.5369] dhcp: init: Using DHCP client 'internal'
Dec 11 08:33:35 np0005555077.novalocal NetworkManager[7188]: <info>  [1765442015.5371] manager: (lo): new Loopback device (/org/freedesktop/NetworkManager/Devices/1)
Dec 11 08:33:35 np0005555077.novalocal NetworkManager[7188]: <info>  [1765442015.5376] device (lo): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Dec 11 08:33:35 np0005555077.novalocal NetworkManager[7188]: <info>  [1765442015.5380] device (lo): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'external')
Dec 11 08:33:35 np0005555077.novalocal NetworkManager[7188]: <info>  [1765442015.5387] device (lo): Activation: starting connection 'lo' (cc03eaf0-9208-4de5-bc26-72936417c77f)
Dec 11 08:33:35 np0005555077.novalocal NetworkManager[7188]: <info>  [1765442015.5392] device (eth0): carrier: link connected
Dec 11 08:33:35 np0005555077.novalocal NetworkManager[7188]: <info>  [1765442015.5396] manager: (eth0): new Ethernet device (/org/freedesktop/NetworkManager/Devices/2)
Dec 11 08:33:35 np0005555077.novalocal NetworkManager[7188]: <info>  [1765442015.5399] manager: (eth0): assume: will attempt to assume matching connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03) (indicated)
Dec 11 08:33:35 np0005555077.novalocal NetworkManager[7188]: <info>  [1765442015.5400] device (eth0): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'assume')
Dec 11 08:33:35 np0005555077.novalocal NetworkManager[7188]: <info>  [1765442015.5405] device (eth0): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'assume')
Dec 11 08:33:35 np0005555077.novalocal NetworkManager[7188]: <info>  [1765442015.5410] device (eth0): Activation: starting connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03)
Dec 11 08:33:35 np0005555077.novalocal NetworkManager[7188]: <info>  [1765442015.5415] device (eth1): carrier: link connected
Dec 11 08:33:35 np0005555077.novalocal NetworkManager[7188]: <info>  [1765442015.5419] manager: (eth1): new Ethernet device (/org/freedesktop/NetworkManager/Devices/3)
Dec 11 08:33:35 np0005555077.novalocal NetworkManager[7188]: <info>  [1765442015.5422] manager: (eth1): assume: will attempt to assume matching connection 'Wired connection 1' (5a3b4b8d-0dcd-398c-b979-a7322f259c3a) (indicated)
Dec 11 08:33:35 np0005555077.novalocal NetworkManager[7188]: <info>  [1765442015.5423] device (eth1): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'assume')
Dec 11 08:33:35 np0005555077.novalocal NetworkManager[7188]: <info>  [1765442015.5427] device (eth1): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'assume')
Dec 11 08:33:35 np0005555077.novalocal NetworkManager[7188]: <info>  [1765442015.5433] device (eth1): Activation: starting connection 'Wired connection 1' (5a3b4b8d-0dcd-398c-b979-a7322f259c3a)
Dec 11 08:33:35 np0005555077.novalocal NetworkManager[7188]: <info>  [1765442015.5438] bus-manager: acquired D-Bus service "org.freedesktop.NetworkManager"
Dec 11 08:33:35 np0005555077.novalocal NetworkManager[7188]: <info>  [1765442015.5442] device (lo): state change: disconnected -> prepare (reason 'none', managed-type: 'external')
Dec 11 08:33:35 np0005555077.novalocal systemd[1]: Started Network Manager.
Dec 11 08:33:35 np0005555077.novalocal NetworkManager[7188]: <info>  [1765442015.5444] device (lo): state change: prepare -> config (reason 'none', managed-type: 'external')
Dec 11 08:33:35 np0005555077.novalocal NetworkManager[7188]: <info>  [1765442015.5446] device (lo): state change: config -> ip-config (reason 'none', managed-type: 'external')
Dec 11 08:33:35 np0005555077.novalocal NetworkManager[7188]: <info>  [1765442015.5448] device (eth0): state change: disconnected -> prepare (reason 'none', managed-type: 'assume')
Dec 11 08:33:35 np0005555077.novalocal NetworkManager[7188]: <info>  [1765442015.5450] device (eth0): state change: prepare -> config (reason 'none', managed-type: 'assume')
Dec 11 08:33:35 np0005555077.novalocal NetworkManager[7188]: <info>  [1765442015.5452] device (eth1): state change: disconnected -> prepare (reason 'none', managed-type: 'assume')
Dec 11 08:33:35 np0005555077.novalocal NetworkManager[7188]: <info>  [1765442015.5455] device (eth1): state change: prepare -> config (reason 'none', managed-type: 'assume')
Dec 11 08:33:35 np0005555077.novalocal NetworkManager[7188]: <info>  [1765442015.5457] device (lo): state change: ip-config -> ip-check (reason 'none', managed-type: 'external')
Dec 11 08:33:35 np0005555077.novalocal NetworkManager[7188]: <info>  [1765442015.5463] device (eth0): state change: config -> ip-config (reason 'none', managed-type: 'assume')
Dec 11 08:33:35 np0005555077.novalocal NetworkManager[7188]: <info>  [1765442015.5466] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
Dec 11 08:33:35 np0005555077.novalocal NetworkManager[7188]: <info>  [1765442015.5474] device (eth1): state change: config -> ip-config (reason 'none', managed-type: 'assume')
Dec 11 08:33:35 np0005555077.novalocal NetworkManager[7188]: <info>  [1765442015.5477] dhcp4 (eth1): activation: beginning transaction (timeout in 45 seconds)
Dec 11 08:33:35 np0005555077.novalocal NetworkManager[7188]: <info>  [1765442015.5494] device (lo): state change: ip-check -> secondaries (reason 'none', managed-type: 'external')
Dec 11 08:33:35 np0005555077.novalocal NetworkManager[7188]: <info>  [1765442015.5495] device (lo): state change: secondaries -> activated (reason 'none', managed-type: 'external')
Dec 11 08:33:35 np0005555077.novalocal NetworkManager[7188]: <info>  [1765442015.5499] device (lo): Activation: successful, device activated.
Dec 11 08:33:35 np0005555077.novalocal NetworkManager[7188]: <info>  [1765442015.5503] dhcp4 (eth0): state changed new lease, address=38.102.83.223
Dec 11 08:33:35 np0005555077.novalocal NetworkManager[7188]: <info>  [1765442015.5509] policy: set 'System eth0' (eth0) as default for IPv4 routing and DNS
Dec 11 08:33:35 np0005555077.novalocal systemd[1]: Starting Network Manager Wait Online...
Dec 11 08:33:35 np0005555077.novalocal NetworkManager[7188]: <info>  [1765442015.5566] device (eth0): state change: ip-config -> ip-check (reason 'none', managed-type: 'assume')
Dec 11 08:33:35 np0005555077.novalocal NetworkManager[7188]: <info>  [1765442015.5629] device (eth0): state change: ip-check -> secondaries (reason 'none', managed-type: 'assume')
Dec 11 08:33:35 np0005555077.novalocal NetworkManager[7188]: <info>  [1765442015.5632] device (eth0): state change: secondaries -> activated (reason 'none', managed-type: 'assume')
Dec 11 08:33:35 np0005555077.novalocal NetworkManager[7188]: <info>  [1765442015.5636] manager: NetworkManager state is now CONNECTED_SITE
Dec 11 08:33:35 np0005555077.novalocal NetworkManager[7188]: <info>  [1765442015.5638] device (eth0): Activation: successful, device activated.
Dec 11 08:33:35 np0005555077.novalocal NetworkManager[7188]: <info>  [1765442015.5646] manager: NetworkManager state is now CONNECTED_GLOBAL
Dec 11 08:33:35 np0005555077.novalocal sudo[7176]: pam_unix(sudo:session): session closed for user root
Dec 11 08:33:35 np0005555077.novalocal python3[7263]: ansible-ansible.legacy.command Invoked with _raw_params=ip route zuul_log_id=fa163efc-24cc-d2a4-3bf1-0000000000bd-0-controller zuul_ansible_split_streams=False _uses_shell=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 11 08:33:45 np0005555077.novalocal systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Dec 11 08:34:05 np0005555077.novalocal systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Dec 11 08:34:21 np0005555077.novalocal NetworkManager[7188]: <info>  [1765442061.1115] device (eth1): state change: ip-config -> ip-check (reason 'none', managed-type: 'assume')
Dec 11 08:34:21 np0005555077.novalocal systemd[1]: Starting Network Manager Script Dispatcher Service...
Dec 11 08:34:21 np0005555077.novalocal systemd[1]: Started Network Manager Script Dispatcher Service.
Dec 11 08:34:21 np0005555077.novalocal NetworkManager[7188]: <info>  [1765442061.1755] device (eth1): state change: ip-check -> secondaries (reason 'none', managed-type: 'assume')
Dec 11 08:34:21 np0005555077.novalocal NetworkManager[7188]: <info>  [1765442061.1760] device (eth1): state change: secondaries -> activated (reason 'none', managed-type: 'assume')
Dec 11 08:34:21 np0005555077.novalocal NetworkManager[7188]: <info>  [1765442061.1775] device (eth1): Activation: successful, device activated.
Dec 11 08:34:21 np0005555077.novalocal NetworkManager[7188]: <info>  [1765442061.1789] manager: startup complete
Dec 11 08:34:21 np0005555077.novalocal NetworkManager[7188]: <info>  [1765442061.1796] device (eth1): state change: activated -> failed (reason 'ip-config-unavailable', managed-type: 'full')
Dec 11 08:34:21 np0005555077.novalocal NetworkManager[7188]: <warn>  [1765442061.1804] device (eth1): Activation: failed for connection 'Wired connection 1'
Dec 11 08:34:21 np0005555077.novalocal NetworkManager[7188]: <info>  [1765442061.1818] device (eth1): state change: failed -> disconnected (reason 'none', managed-type: 'full')
Dec 11 08:34:21 np0005555077.novalocal systemd[1]: Finished Network Manager Wait Online.
Dec 11 08:34:21 np0005555077.novalocal NetworkManager[7188]: <info>  [1765442061.1945] dhcp4 (eth1): canceled DHCP transaction
Dec 11 08:34:21 np0005555077.novalocal NetworkManager[7188]: <info>  [1765442061.1946] dhcp4 (eth1): activation: beginning transaction (timeout in 45 seconds)
Dec 11 08:34:21 np0005555077.novalocal NetworkManager[7188]: <info>  [1765442061.1947] dhcp4 (eth1): state changed no lease
Dec 11 08:34:21 np0005555077.novalocal NetworkManager[7188]: <info>  [1765442061.1985] policy: auto-activating connection 'ci-private-network' (ba5fe1b2-21e2-562b-838b-8e9d781a1880)
Dec 11 08:34:21 np0005555077.novalocal NetworkManager[7188]: <info>  [1765442061.1995] device (eth1): Activation: starting connection 'ci-private-network' (ba5fe1b2-21e2-562b-838b-8e9d781a1880)
Dec 11 08:34:21 np0005555077.novalocal NetworkManager[7188]: <info>  [1765442061.1997] device (eth1): state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Dec 11 08:34:21 np0005555077.novalocal NetworkManager[7188]: <info>  [1765442061.2004] device (eth1): state change: prepare -> config (reason 'none', managed-type: 'full')
Dec 11 08:34:21 np0005555077.novalocal NetworkManager[7188]: <info>  [1765442061.2020] device (eth1): state change: config -> ip-config (reason 'none', managed-type: 'full')
Dec 11 08:34:21 np0005555077.novalocal NetworkManager[7188]: <info>  [1765442061.2047] device (eth1): state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Dec 11 08:34:21 np0005555077.novalocal NetworkManager[7188]: <info>  [1765442061.2136] device (eth1): state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Dec 11 08:34:21 np0005555077.novalocal NetworkManager[7188]: <info>  [1765442061.2141] device (eth1): state change: secondaries -> activated (reason 'none', managed-type: 'full')
Dec 11 08:34:21 np0005555077.novalocal NetworkManager[7188]: <info>  [1765442061.2158] device (eth1): Activation: successful, device activated.
Dec 11 08:34:31 np0005555077.novalocal systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Dec 11 08:34:35 np0005555077.novalocal sshd-session[4314]: Received disconnect from 38.102.83.114 port 60478:11: disconnected by user
Dec 11 08:34:35 np0005555077.novalocal sshd-session[4314]: Disconnected from user zuul 38.102.83.114 port 60478
Dec 11 08:34:35 np0005555077.novalocal sshd-session[4301]: pam_unix(sshd:session): session closed for user zuul
Dec 11 08:34:35 np0005555077.novalocal systemd-logind[792]: Session 1 logged out. Waiting for processes to exit.
Dec 11 08:35:51 np0005555077.novalocal sshd-session[7291]: Accepted publickey for zuul from 38.102.83.114 port 41374 ssh2: RSA SHA256:Y1EkKFCM2AxcqFrasoatI/7noXQ4Hq5V3b6Fo5AKQhU
Dec 11 08:35:51 np0005555077.novalocal systemd-logind[792]: New session 3 of user zuul.
Dec 11 08:35:51 np0005555077.novalocal systemd[1]: Started Session 3 of User zuul.
Dec 11 08:35:51 np0005555077.novalocal sshd-session[7291]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Dec 11 08:35:51 np0005555077.novalocal sudo[7371]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qqrbhzyrkfujuzzifqwhwecfjudzxpmo ; OS_CLOUD=vexxhost /usr/bin/python3'
Dec 11 08:35:51 np0005555077.novalocal sudo[7371]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 11 08:35:51 np0005555077.novalocal python3[7373]: ansible-ansible.legacy.stat Invoked with path=/etc/ci/env/networking-info.yml follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec 11 08:35:51 np0005555077.novalocal sudo[7371]: pam_unix(sudo:session): session closed for user root
Dec 11 08:35:51 np0005555077.novalocal sudo[7444]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-soedtbucjeeacxsuubuashmskcrftlwc ; OS_CLOUD=vexxhost /usr/bin/python3'
Dec 11 08:35:51 np0005555077.novalocal sudo[7444]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 11 08:35:51 np0005555077.novalocal python3[7446]: ansible-ansible.legacy.copy Invoked with dest=/etc/ci/env/networking-info.yml owner=root group=root mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1765442151.1729271-373-211293655449199/source _original_basename=tmp5vrjsyqw follow=False checksum=b377fca187e86c4139f736c3bc95363c4dbb7898 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 11 08:35:51 np0005555077.novalocal sudo[7444]: pam_unix(sudo:session): session closed for user root
Dec 11 08:35:56 np0005555077.novalocal sshd-session[7294]: Connection closed by 38.102.83.114 port 41374
Dec 11 08:35:56 np0005555077.novalocal sshd-session[7291]: pam_unix(sshd:session): session closed for user zuul
Dec 11 08:35:56 np0005555077.novalocal systemd[1]: session-3.scope: Deactivated successfully.
Dec 11 08:35:56 np0005555077.novalocal systemd-logind[792]: Session 3 logged out. Waiting for processes to exit.
Dec 11 08:35:56 np0005555077.novalocal systemd-logind[792]: Removed session 3.
Dec 11 08:37:07 np0005555077.novalocal systemd[4305]: Created slice User Background Tasks Slice.
Dec 11 08:37:07 np0005555077.novalocal systemd[4305]: Starting Cleanup of User's Temporary Files and Directories...
Dec 11 08:37:08 np0005555077.novalocal systemd[4305]: Finished Cleanup of User's Temporary Files and Directories.
Dec 11 08:42:11 np0005555077.novalocal sshd-session[7478]: Accepted publickey for zuul from 38.102.83.114 port 43278 ssh2: RSA SHA256:Y1EkKFCM2AxcqFrasoatI/7noXQ4Hq5V3b6Fo5AKQhU
Dec 11 08:42:11 np0005555077.novalocal systemd-logind[792]: New session 4 of user zuul.
Dec 11 08:42:11 np0005555077.novalocal systemd[1]: Started Session 4 of User zuul.
Dec 11 08:42:11 np0005555077.novalocal sshd-session[7478]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Dec 11 08:42:11 np0005555077.novalocal sudo[7505]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-skmyvfylovmrgbngnyewkjvbqmmikxhc ; /usr/bin/python3'
Dec 11 08:42:11 np0005555077.novalocal sudo[7505]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 11 08:42:12 np0005555077.novalocal python3[7507]: ansible-ansible.legacy.command Invoked with _raw_params=lsblk -nd -o MAJ:MIN /dev/vda
                                                       _uses_shell=True zuul_log_id=fa163efc-24cc-1cc7-554c-000000001f1d-1-compute0 zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 11 08:42:12 np0005555077.novalocal sudo[7505]: pam_unix(sudo:session): session closed for user root
Dec 11 08:42:12 np0005555077.novalocal sudo[7534]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-utecxxvskftygrwqgjxxkndbcqzauwsa ; /usr/bin/python3'
Dec 11 08:42:12 np0005555077.novalocal sudo[7534]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 11 08:42:13 np0005555077.novalocal python3[7536]: ansible-ansible.builtin.file Invoked with path=/sys/fs/cgroup/init.scope state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 11 08:42:13 np0005555077.novalocal sudo[7534]: pam_unix(sudo:session): session closed for user root
Dec 11 08:42:13 np0005555077.novalocal sudo[7560]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ivxscmznujusycpjdvizmmmofwxwpzfg ; /usr/bin/python3'
Dec 11 08:42:13 np0005555077.novalocal sudo[7560]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 11 08:42:13 np0005555077.novalocal python3[7562]: ansible-ansible.builtin.file Invoked with path=/sys/fs/cgroup/machine.slice state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 11 08:42:13 np0005555077.novalocal sudo[7560]: pam_unix(sudo:session): session closed for user root
Dec 11 08:42:13 np0005555077.novalocal sudo[7586]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cdvnrrnrqoohkrgfmupumtkojrpoagyg ; /usr/bin/python3'
Dec 11 08:42:13 np0005555077.novalocal sudo[7586]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 11 08:42:13 np0005555077.novalocal python3[7588]: ansible-ansible.builtin.file Invoked with path=/sys/fs/cgroup/system.slice state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 11 08:42:13 np0005555077.novalocal sudo[7586]: pam_unix(sudo:session): session closed for user root
Dec 11 08:42:13 np0005555077.novalocal sudo[7612]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kostufwgdzeylpbwkbyqmabgrwfdywmf ; /usr/bin/python3'
Dec 11 08:42:13 np0005555077.novalocal sudo[7612]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 11 08:42:13 np0005555077.novalocal python3[7614]: ansible-ansible.builtin.file Invoked with path=/sys/fs/cgroup/user.slice state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 11 08:42:13 np0005555077.novalocal sudo[7612]: pam_unix(sudo:session): session closed for user root
Dec 11 08:42:14 np0005555077.novalocal sudo[7638]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tgsqmtotsacdcrsvgetjrjxtgjtolcub ; /usr/bin/python3'
Dec 11 08:42:14 np0005555077.novalocal sudo[7638]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 11 08:42:14 np0005555077.novalocal python3[7640]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system.conf.d state=directory mode=0755 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 11 08:42:14 np0005555077.novalocal sudo[7638]: pam_unix(sudo:session): session closed for user root
Dec 11 08:42:14 np0005555077.novalocal sudo[7716]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sdcwjximqagjkyywkbwuuxdjchhesvkb ; /usr/bin/python3'
Dec 11 08:42:14 np0005555077.novalocal sudo[7716]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 11 08:42:14 np0005555077.novalocal python3[7718]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system.conf.d/override.conf follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec 11 08:42:14 np0005555077.novalocal sudo[7716]: pam_unix(sudo:session): session closed for user root
Dec 11 08:42:14 np0005555077.novalocal sudo[7789]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pirldotfwpkomfkqsayqnzljsmxguqro ; /usr/bin/python3'
Dec 11 08:42:14 np0005555077.novalocal sudo[7789]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 11 08:42:15 np0005555077.novalocal python3[7791]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system.conf.d/override.conf mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1765442534.4665034-523-228765924899172/source _original_basename=tmpa8eatiys follow=False checksum=a05098bd3d2321238ea1169d0e6f135b35b392d4 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 11 08:42:15 np0005555077.novalocal sudo[7789]: pam_unix(sudo:session): session closed for user root
Dec 11 08:42:15 np0005555077.novalocal sudo[7839]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jkjfvhhuqldjtvmyequjworrqykpkjio ; /usr/bin/python3'
Dec 11 08:42:15 np0005555077.novalocal sudo[7839]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 11 08:42:16 np0005555077.novalocal python3[7841]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Dec 11 08:42:16 np0005555077.novalocal systemd[1]: Reloading.
Dec 11 08:42:16 np0005555077.novalocal systemd-rc-local-generator[7861]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 11 08:42:16 np0005555077.novalocal sudo[7839]: pam_unix(sudo:session): session closed for user root
Dec 11 08:42:17 np0005555077.novalocal sudo[7895]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hnxqhxiesoxhpswnzekpcwdsluwzohzh ; /usr/bin/python3'
Dec 11 08:42:17 np0005555077.novalocal sudo[7895]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 11 08:42:17 np0005555077.novalocal python3[7897]: ansible-ansible.builtin.wait_for Invoked with path=/sys/fs/cgroup/system.slice/io.max state=present timeout=30 host=127.0.0.1 connect_timeout=5 delay=0 active_connection_states=['ESTABLISHED', 'FIN_WAIT1', 'FIN_WAIT2', 'SYN_RECV', 'SYN_SENT', 'TIME_WAIT'] sleep=1 port=None search_regex=None exclude_hosts=None msg=None
Dec 11 08:42:17 np0005555077.novalocal sudo[7895]: pam_unix(sudo:session): session closed for user root
Dec 11 08:42:18 np0005555077.novalocal sudo[7921]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-layjqxzahclizsoeovqidoauvrtfzllc ; /usr/bin/python3'
Dec 11 08:42:18 np0005555077.novalocal sudo[7921]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 11 08:42:18 np0005555077.novalocal python3[7923]: ansible-ansible.legacy.command Invoked with _raw_params=echo "252:0   riops=18000 wiops=18000 rbps=262144000 wbps=262144000" > /sys/fs/cgroup/init.scope/io.max
                                                       _uses_shell=True zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 11 08:42:18 np0005555077.novalocal sudo[7921]: pam_unix(sudo:session): session closed for user root
Dec 11 08:42:18 np0005555077.novalocal sudo[7949]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-frxqkvhkloontupddrbytmnmjuvvjsto ; /usr/bin/python3'
Dec 11 08:42:18 np0005555077.novalocal sudo[7949]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 11 08:42:18 np0005555077.novalocal python3[7951]: ansible-ansible.legacy.command Invoked with _raw_params=echo "252:0   riops=18000 wiops=18000 rbps=262144000 wbps=262144000" > /sys/fs/cgroup/machine.slice/io.max
                                                       _uses_shell=True zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 11 08:42:18 np0005555077.novalocal sudo[7949]: pam_unix(sudo:session): session closed for user root
Dec 11 08:42:18 np0005555077.novalocal sudo[7977]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fgbzdusbjblqswvrepjabqdvduhkcqvu ; /usr/bin/python3'
Dec 11 08:42:18 np0005555077.novalocal sudo[7977]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 11 08:42:18 np0005555077.novalocal python3[7979]: ansible-ansible.legacy.command Invoked with _raw_params=echo "252:0   riops=18000 wiops=18000 rbps=262144000 wbps=262144000" > /sys/fs/cgroup/system.slice/io.max
                                                       _uses_shell=True zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 11 08:42:18 np0005555077.novalocal sudo[7977]: pam_unix(sudo:session): session closed for user root
Dec 11 08:42:19 np0005555077.novalocal sudo[8005]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yalcbtkwlgqzjptxnzezxjkdiiddqezb ; /usr/bin/python3'
Dec 11 08:42:19 np0005555077.novalocal sudo[8005]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 11 08:42:19 np0005555077.novalocal python3[8007]: ansible-ansible.legacy.command Invoked with _raw_params=echo "252:0   riops=18000 wiops=18000 rbps=262144000 wbps=262144000" > /sys/fs/cgroup/user.slice/io.max
                                                       _uses_shell=True zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 11 08:42:19 np0005555077.novalocal sudo[8005]: pam_unix(sudo:session): session closed for user root
Dec 11 08:42:19 np0005555077.novalocal python3[8034]: ansible-ansible.legacy.command Invoked with _raw_params=echo "init";    cat /sys/fs/cgroup/init.scope/io.max; echo "machine"; cat /sys/fs/cgroup/machine.slice/io.max; echo "system";  cat /sys/fs/cgroup/system.slice/io.max; echo "user";    cat /sys/fs/cgroup/user.slice/io.max;
                                                       _uses_shell=True zuul_log_id=fa163efc-24cc-1cc7-554c-000000001f24-1-compute0 zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 11 08:42:20 np0005555077.novalocal python3[8064]: ansible-ansible.builtin.stat Invoked with path=/sys/fs/cgroup/kubepods.slice/io.max follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Dec 11 08:42:23 np0005555077.novalocal sshd-session[7481]: Connection closed by 38.102.83.114 port 43278
Dec 11 08:42:23 np0005555077.novalocal sshd-session[7478]: pam_unix(sshd:session): session closed for user zuul
Dec 11 08:42:23 np0005555077.novalocal systemd[1]: session-4.scope: Deactivated successfully.
Dec 11 08:42:23 np0005555077.novalocal systemd[1]: session-4.scope: Consumed 4.373s CPU time.
Dec 11 08:42:23 np0005555077.novalocal systemd-logind[792]: Session 4 logged out. Waiting for processes to exit.
Dec 11 08:42:23 np0005555077.novalocal systemd-logind[792]: Removed session 4.
Dec 11 08:42:25 np0005555077.novalocal sshd-session[8069]: Accepted publickey for zuul from 38.102.83.114 port 56152 ssh2: RSA SHA256:Y1EkKFCM2AxcqFrasoatI/7noXQ4Hq5V3b6Fo5AKQhU
Dec 11 08:42:25 np0005555077.novalocal systemd-logind[792]: New session 5 of user zuul.
Dec 11 08:42:25 np0005555077.novalocal systemd[1]: Started Session 5 of User zuul.
Dec 11 08:42:25 np0005555077.novalocal sshd-session[8069]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Dec 11 08:42:25 np0005555077.novalocal sudo[8096]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bsudylhakluoebdgljcponcqxbtgysvp ; /usr/bin/python3'
Dec 11 08:42:25 np0005555077.novalocal sudo[8096]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 11 08:42:25 np0005555077.novalocal python3[8098]: ansible-ansible.legacy.dnf Invoked with name=['podman', 'buildah'] state=present allow_downgrade=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 allowerasing=False nobest=False use_backend=auto conf_file=None disable_excludes=None download_dir=None list=None releasever=None
Dec 11 08:42:42 np0005555077.novalocal kernel: SELinux:  Converting 385 SID table entries...
Dec 11 08:42:42 np0005555077.novalocal kernel: SELinux:  policy capability network_peer_controls=1
Dec 11 08:42:42 np0005555077.novalocal kernel: SELinux:  policy capability open_perms=1
Dec 11 08:42:42 np0005555077.novalocal kernel: SELinux:  policy capability extended_socket_class=1
Dec 11 08:42:42 np0005555077.novalocal kernel: SELinux:  policy capability always_check_network=0
Dec 11 08:42:42 np0005555077.novalocal kernel: SELinux:  policy capability cgroup_seclabel=1
Dec 11 08:42:42 np0005555077.novalocal kernel: SELinux:  policy capability nnp_nosuid_transition=1
Dec 11 08:42:42 np0005555077.novalocal kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Dec 11 08:42:53 np0005555077.novalocal kernel: SELinux:  Converting 385 SID table entries...
Dec 11 08:42:53 np0005555077.novalocal kernel: SELinux:  policy capability network_peer_controls=1
Dec 11 08:42:53 np0005555077.novalocal kernel: SELinux:  policy capability open_perms=1
Dec 11 08:42:53 np0005555077.novalocal kernel: SELinux:  policy capability extended_socket_class=1
Dec 11 08:42:53 np0005555077.novalocal kernel: SELinux:  policy capability always_check_network=0
Dec 11 08:42:53 np0005555077.novalocal kernel: SELinux:  policy capability cgroup_seclabel=1
Dec 11 08:42:53 np0005555077.novalocal kernel: SELinux:  policy capability nnp_nosuid_transition=1
Dec 11 08:42:53 np0005555077.novalocal kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Dec 11 08:43:03 np0005555077.novalocal kernel: SELinux:  Converting 385 SID table entries...
Dec 11 08:43:03 np0005555077.novalocal kernel: SELinux:  policy capability network_peer_controls=1
Dec 11 08:43:03 np0005555077.novalocal kernel: SELinux:  policy capability open_perms=1
Dec 11 08:43:03 np0005555077.novalocal kernel: SELinux:  policy capability extended_socket_class=1
Dec 11 08:43:03 np0005555077.novalocal kernel: SELinux:  policy capability always_check_network=0
Dec 11 08:43:03 np0005555077.novalocal kernel: SELinux:  policy capability cgroup_seclabel=1
Dec 11 08:43:03 np0005555077.novalocal kernel: SELinux:  policy capability nnp_nosuid_transition=1
Dec 11 08:43:03 np0005555077.novalocal kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Dec 11 08:43:04 np0005555077.novalocal setsebool[8164]: The virt_use_nfs policy boolean was changed to 1 by root
Dec 11 08:43:04 np0005555077.novalocal setsebool[8164]: The virt_sandbox_use_all_caps policy boolean was changed to 1 by root
Dec 11 08:43:16 np0005555077.novalocal kernel: SELinux:  Converting 388 SID table entries...
Dec 11 08:43:16 np0005555077.novalocal kernel: SELinux:  policy capability network_peer_controls=1
Dec 11 08:43:16 np0005555077.novalocal kernel: SELinux:  policy capability open_perms=1
Dec 11 08:43:16 np0005555077.novalocal kernel: SELinux:  policy capability extended_socket_class=1
Dec 11 08:43:16 np0005555077.novalocal kernel: SELinux:  policy capability always_check_network=0
Dec 11 08:43:16 np0005555077.novalocal kernel: SELinux:  policy capability cgroup_seclabel=1
Dec 11 08:43:16 np0005555077.novalocal kernel: SELinux:  policy capability nnp_nosuid_transition=1
Dec 11 08:43:16 np0005555077.novalocal kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Dec 11 08:43:35 np0005555077.novalocal dbus-broker-launch[780]: avc:  op=load_policy lsm=selinux seqno=6 res=1
Dec 11 08:43:35 np0005555077.novalocal systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Dec 11 08:43:35 np0005555077.novalocal systemd[1]: Starting man-db-cache-update.service...
Dec 11 08:43:35 np0005555077.novalocal systemd[1]: Reloading.
Dec 11 08:43:35 np0005555077.novalocal systemd-rc-local-generator[8916]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 11 08:43:36 np0005555077.novalocal systemd[1]: Queuing reload/restart jobs for marked units…
Dec 11 08:43:38 np0005555077.novalocal sudo[8096]: pam_unix(sudo:session): session closed for user root
Dec 11 08:44:02 np0005555077.novalocal python3[22561]: ansible-ansible.legacy.command Invoked with _raw_params=echo "openstack-k8s-operators+cirobot"
                                                        _uses_shell=True zuul_log_id=fa163efc-24cc-e635-fda5-00000000000c-1-compute0 zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 11 08:44:03 np0005555077.novalocal kernel: evm: overlay not supported
Dec 11 08:44:03 np0005555077.novalocal systemd[4305]: Starting D-Bus User Message Bus...
Dec 11 08:44:03 np0005555077.novalocal dbus-broker-launch[23024]: Policy to allow eavesdropping in /usr/share/dbus-1/session.conf +31: Eavesdropping is deprecated and ignored
Dec 11 08:44:03 np0005555077.novalocal dbus-broker-launch[23024]: Policy to allow eavesdropping in /usr/share/dbus-1/session.conf +33: Eavesdropping is deprecated and ignored
Dec 11 08:44:03 np0005555077.novalocal systemd[4305]: Started D-Bus User Message Bus.
Dec 11 08:44:03 np0005555077.novalocal dbus-broker-lau[23024]: Ready
Dec 11 08:44:03 np0005555077.novalocal systemd[4305]: selinux: avc:  op=load_policy lsm=selinux seqno=6 res=1
Dec 11 08:44:03 np0005555077.novalocal systemd[4305]: Created slice Slice /user.
Dec 11 08:44:03 np0005555077.novalocal systemd[4305]: podman-22945.scope: unit configures an IP firewall, but not running as root.
Dec 11 08:44:03 np0005555077.novalocal systemd[4305]: (This warning is only shown for the first unit using IP firewalling.)
Dec 11 08:44:03 np0005555077.novalocal systemd[4305]: Started podman-22945.scope.
Dec 11 08:44:03 np0005555077.novalocal systemd[4305]: Started podman-pause-b60d9f04.scope.
Dec 11 08:44:05 np0005555077.novalocal sudo[23948]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cvfooehrvoydgoxectmmcxwvyckmisqk ; /usr/bin/python3'
Dec 11 08:44:05 np0005555077.novalocal sudo[23948]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 11 08:44:05 np0005555077.novalocal python3[23965]: ansible-ansible.builtin.blockinfile Invoked with state=present insertafter=EOF dest=/etc/containers/registries.conf content=[[registry]]
                                                       location = "38.102.83.162:5001"
                                                       insecure = true path=/etc/containers/registries.conf block=[[registry]]
                                                       location = "38.102.83.162:5001"
                                                       insecure = true marker=# {mark} ANSIBLE MANAGED BLOCK create=False backup=False marker_begin=BEGIN marker_end=END unsafe_writes=False insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 11 08:44:05 np0005555077.novalocal python3[23965]: ansible-ansible.builtin.blockinfile [WARNING] Module remote_tmp /root/.ansible/tmp did not exist and was created with a mode of 0700, this may cause issues when running as another user. To avoid this, create the remote_tmp dir with the correct permissions manually
Dec 11 08:44:06 np0005555077.novalocal sudo[23948]: pam_unix(sudo:session): session closed for user root
Dec 11 08:44:06 np0005555077.novalocal sshd-session[8072]: Connection closed by 38.102.83.114 port 56152
Dec 11 08:44:06 np0005555077.novalocal sshd-session[8069]: pam_unix(sshd:session): session closed for user zuul
Dec 11 08:44:06 np0005555077.novalocal systemd[1]: session-5.scope: Deactivated successfully.
Dec 11 08:44:06 np0005555077.novalocal systemd[1]: session-5.scope: Consumed 1min 9.092s CPU time.
Dec 11 08:44:06 np0005555077.novalocal systemd-logind[792]: Session 5 logged out. Waiting for processes to exit.
Dec 11 08:44:06 np0005555077.novalocal systemd-logind[792]: Removed session 5.
Dec 11 08:44:20 np0005555077.novalocal systemd[1]: man-db-cache-update.service: Deactivated successfully.
Dec 11 08:44:20 np0005555077.novalocal systemd[1]: Finished man-db-cache-update.service.
Dec 11 08:44:20 np0005555077.novalocal systemd[1]: man-db-cache-update.service: Consumed 52.911s CPU time.
Dec 11 08:44:20 np0005555077.novalocal systemd[1]: run-rc2b17265f91d4b86973941e1212fb9fd.service: Deactivated successfully.
Dec 11 08:44:27 np0005555077.novalocal sshd-session[29581]: Connection closed by 38.102.83.179 port 43108 [preauth]
Dec 11 08:44:27 np0005555077.novalocal sshd-session[29583]: Unable to negotiate with 38.102.83.179 port 43118: no matching host key type found. Their offer: ssh-ed25519 [preauth]
Dec 11 08:44:27 np0005555077.novalocal sshd-session[29580]: Unable to negotiate with 38.102.83.179 port 43122: no matching host key type found. Their offer: sk-ecdsa-sha2-nistp256@openssh.com [preauth]
Dec 11 08:44:27 np0005555077.novalocal sshd-session[29579]: Unable to negotiate with 38.102.83.179 port 43126: no matching host key type found. Their offer: sk-ssh-ed25519@openssh.com [preauth]
Dec 11 08:44:27 np0005555077.novalocal sshd-session[29582]: Connection closed by 38.102.83.179 port 43104 [preauth]
Dec 11 08:44:33 np0005555077.novalocal sshd-session[29589]: Accepted publickey for zuul from 38.102.83.114 port 39458 ssh2: RSA SHA256:Y1EkKFCM2AxcqFrasoatI/7noXQ4Hq5V3b6Fo5AKQhU
Dec 11 08:44:33 np0005555077.novalocal systemd-logind[792]: New session 6 of user zuul.
Dec 11 08:44:33 np0005555077.novalocal systemd[1]: Started Session 6 of User zuul.
Dec 11 08:44:33 np0005555077.novalocal sshd-session[29589]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Dec 11 08:44:33 np0005555077.novalocal python3[29616]: ansible-ansible.posix.authorized_key Invoked with user=zuul key=ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBBstpakOyiBUkVKE8qhLvJSJmnUPKz1ryqhyWx7jyzgwnQhXG4D3sCzq6j9vQt4UHZd7CtghkmU8N5sKq6RWC78= zuul@np0005555076.novalocal
                                                        manage_dir=True state=present exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec 11 08:44:34 np0005555077.novalocal sudo[29640]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ubrlqcutmzbivrhesttwndeerrwlzhea ; /usr/bin/python3'
Dec 11 08:44:34 np0005555077.novalocal sudo[29640]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 11 08:44:34 np0005555077.novalocal python3[29642]: ansible-ansible.posix.authorized_key Invoked with user=root key=ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBBstpakOyiBUkVKE8qhLvJSJmnUPKz1ryqhyWx7jyzgwnQhXG4D3sCzq6j9vQt4UHZd7CtghkmU8N5sKq6RWC78= zuul@np0005555076.novalocal
                                                        manage_dir=True state=present exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec 11 08:44:34 np0005555077.novalocal sudo[29640]: pam_unix(sudo:session): session closed for user root
Dec 11 08:44:35 np0005555077.novalocal sudo[29666]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-blzeomsqqgzpkooqqboblxezaznuxbgb ; /usr/bin/python3'
Dec 11 08:44:35 np0005555077.novalocal sudo[29666]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 11 08:44:35 np0005555077.novalocal python3[29668]: ansible-ansible.builtin.user Invoked with name=cloud-admin shell=/bin/bash state=present non_unique=False force=False remove=False create_home=True system=False move_home=False append=False ssh_key_bits=0 ssh_key_type=rsa ssh_key_comment=ansible-generated on np0005555077.novalocal update_password=always uid=None group=None groups=None comment=None home=None password=NOT_LOGGING_PARAMETER login_class=None password_expire_max=None password_expire_min=None hidden=None seuser=None skeleton=None generate_ssh_key=None ssh_key_file=None ssh_key_passphrase=NOT_LOGGING_PARAMETER expires=None password_lock=None local=None profile=None authorization=None role=None umask=None
Dec 11 08:44:35 np0005555077.novalocal useradd[29670]: new group: name=cloud-admin, GID=1002
Dec 11 08:44:35 np0005555077.novalocal useradd[29670]: new user: name=cloud-admin, UID=1002, GID=1002, home=/home/cloud-admin, shell=/bin/bash, from=none
Dec 11 08:44:35 np0005555077.novalocal sudo[29666]: pam_unix(sudo:session): session closed for user root
Dec 11 08:44:35 np0005555077.novalocal sudo[29700]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ubkvlxcctzzdafwbbhzceopenfwjenjq ; /usr/bin/python3'
Dec 11 08:44:35 np0005555077.novalocal sudo[29700]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 11 08:44:36 np0005555077.novalocal python3[29702]: ansible-ansible.posix.authorized_key Invoked with user=cloud-admin key=ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBBstpakOyiBUkVKE8qhLvJSJmnUPKz1ryqhyWx7jyzgwnQhXG4D3sCzq6j9vQt4UHZd7CtghkmU8N5sKq6RWC78= zuul@np0005555076.novalocal
                                                        manage_dir=True state=present exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec 11 08:44:36 np0005555077.novalocal sudo[29700]: pam_unix(sudo:session): session closed for user root
Dec 11 08:44:36 np0005555077.novalocal sudo[29778]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bhfpopdhxkrwndggmbaqczfticqpzziu ; /usr/bin/python3'
Dec 11 08:44:36 np0005555077.novalocal sudo[29778]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 11 08:44:36 np0005555077.novalocal python3[29780]: ansible-ansible.legacy.stat Invoked with path=/etc/sudoers.d/cloud-admin follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec 11 08:44:36 np0005555077.novalocal sudo[29778]: pam_unix(sudo:session): session closed for user root
Dec 11 08:44:36 np0005555077.novalocal sudo[29851]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-txuasegopyptcocjnzgfsjpwglasnmzp ; /usr/bin/python3'
Dec 11 08:44:36 np0005555077.novalocal sudo[29851]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 11 08:44:37 np0005555077.novalocal python3[29853]: ansible-ansible.legacy.copy Invoked with dest=/etc/sudoers.d/cloud-admin mode=0640 src=/home/zuul/.ansible/tmp/ansible-tmp-1765442676.2162585-167-48483161365457/source _original_basename=tmp_nqsavx8 follow=False checksum=e7614e5ad3ab06eaae55b8efaa2ed81b63ea5634 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 11 08:44:37 np0005555077.novalocal sudo[29851]: pam_unix(sudo:session): session closed for user root
Dec 11 08:44:37 np0005555077.novalocal sudo[29901]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mfrsneqfhvnymncahhznfkpyxphtxewa ; /usr/bin/python3'
Dec 11 08:44:37 np0005555077.novalocal sudo[29901]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 11 08:44:38 np0005555077.novalocal python3[29903]: ansible-ansible.builtin.hostname Invoked with name=compute-0 use=systemd
Dec 11 08:44:38 np0005555077.novalocal systemd[1]: Starting Hostname Service...
Dec 11 08:44:38 np0005555077.novalocal systemd[1]: Started Hostname Service.
Dec 11 08:44:38 np0005555077.novalocal systemd-hostnamed[29907]: Changed pretty hostname to 'compute-0'
Dec 11 08:44:38 compute-0 systemd-hostnamed[29907]: Hostname set to <compute-0> (static)
Dec 11 08:44:38 compute-0 NetworkManager[7188]: <info>  [1765442678.2421] hostname: static hostname changed from "np0005555077.novalocal" to "compute-0"
Dec 11 08:44:38 compute-0 systemd[1]: Starting Network Manager Script Dispatcher Service...
Dec 11 08:44:38 compute-0 systemd[1]: Started Network Manager Script Dispatcher Service.
Dec 11 08:44:38 compute-0 sudo[29901]: pam_unix(sudo:session): session closed for user root
Dec 11 08:44:38 compute-0 sshd-session[29592]: Connection closed by 38.102.83.114 port 39458
Dec 11 08:44:38 compute-0 sshd-session[29589]: pam_unix(sshd:session): session closed for user zuul
Dec 11 08:44:38 compute-0 systemd[1]: session-6.scope: Deactivated successfully.
Dec 11 08:44:38 compute-0 systemd[1]: session-6.scope: Consumed 2.537s CPU time.
Dec 11 08:44:38 compute-0 systemd-logind[792]: Session 6 logged out. Waiting for processes to exit.
Dec 11 08:44:38 compute-0 systemd-logind[792]: Removed session 6.
Dec 11 08:44:48 compute-0 systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Dec 11 08:45:08 compute-0 systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Dec 11 08:46:07 compute-0 systemd[1]: Starting Cleanup of Temporary Directories...
Dec 11 08:46:08 compute-0 systemd[1]: systemd-tmpfiles-clean.service: Deactivated successfully.
Dec 11 08:46:08 compute-0 systemd[1]: Finished Cleanup of Temporary Directories.
Dec 11 08:46:08 compute-0 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dclean.service.mount: Deactivated successfully.
Dec 11 08:49:47 compute-0 sshd-session[29930]: Accepted publickey for zuul from 38.102.83.179 port 58676 ssh2: RSA SHA256:Y1EkKFCM2AxcqFrasoatI/7noXQ4Hq5V3b6Fo5AKQhU
Dec 11 08:49:47 compute-0 systemd-logind[792]: New session 7 of user zuul.
Dec 11 08:49:47 compute-0 systemd[1]: Started Session 7 of User zuul.
Dec 11 08:49:47 compute-0 sshd-session[29930]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Dec 11 08:49:47 compute-0 python3[30006]: ansible-ansible.legacy.setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 11 08:49:49 compute-0 sudo[30120]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xjirezcmvuajicqcnofkgcmmvnjswtdp ; /usr/bin/python3'
Dec 11 08:49:49 compute-0 sudo[30120]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 11 08:49:49 compute-0 python3[30122]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/delorean.repo follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec 11 08:49:49 compute-0 sudo[30120]: pam_unix(sudo:session): session closed for user root
Dec 11 08:49:50 compute-0 sudo[30193]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rdmlusekzuwkrnaesgpvxvmokyknyuqx ; /usr/bin/python3'
Dec 11 08:49:50 compute-0 sudo[30193]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 11 08:49:50 compute-0 python3[30195]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1765442989.5968113-33955-280470357957281/source mode=0755 _original_basename=delorean.repo follow=False checksum=0f7c85cc67bf467c48edf98d5acc63e62d808324 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 11 08:49:50 compute-0 sudo[30193]: pam_unix(sudo:session): session closed for user root
Dec 11 08:49:50 compute-0 sudo[30219]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ylvgodgyzelhsywzkrwoogegnexuxeyv ; /usr/bin/python3'
Dec 11 08:49:50 compute-0 sudo[30219]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 11 08:49:50 compute-0 python3[30221]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/delorean-antelope-testing.repo follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec 11 08:49:50 compute-0 sudo[30219]: pam_unix(sudo:session): session closed for user root
Dec 11 08:49:50 compute-0 sudo[30292]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jqcnasajucpnxgvqqngpwkxwjdxchyak ; /usr/bin/python3'
Dec 11 08:49:50 compute-0 sudo[30292]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 11 08:49:50 compute-0 python3[30294]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1765442989.5968113-33955-280470357957281/source mode=0755 _original_basename=delorean-antelope-testing.repo follow=False checksum=4ebc56dead962b5d40b8d420dad43b948b84d3fc backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 11 08:49:51 compute-0 sudo[30292]: pam_unix(sudo:session): session closed for user root
Dec 11 08:49:51 compute-0 sudo[30318]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-euyabqzecduyurmgdwidjordcvoxxbun ; /usr/bin/python3'
Dec 11 08:49:51 compute-0 sudo[30318]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 11 08:49:51 compute-0 python3[30320]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/repo-setup-centos-highavailability.repo follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec 11 08:49:51 compute-0 sudo[30318]: pam_unix(sudo:session): session closed for user root
Dec 11 08:49:51 compute-0 sudo[30391]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nyzifxtlatswmssefzaoxaeobjjcycfl ; /usr/bin/python3'
Dec 11 08:49:51 compute-0 sudo[30391]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 11 08:49:51 compute-0 python3[30393]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1765442989.5968113-33955-280470357957281/source mode=0755 _original_basename=repo-setup-centos-highavailability.repo follow=False checksum=55d0f695fd0d8f47cbc3044ce0dcf5f88862490f backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 11 08:49:51 compute-0 sudo[30391]: pam_unix(sudo:session): session closed for user root
Dec 11 08:49:51 compute-0 sudo[30417]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-euguwbvthezhaottyoilxgctjiijuyjd ; /usr/bin/python3'
Dec 11 08:49:51 compute-0 sudo[30417]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 11 08:49:51 compute-0 python3[30419]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/repo-setup-centos-powertools.repo follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec 11 08:49:51 compute-0 sudo[30417]: pam_unix(sudo:session): session closed for user root
Dec 11 08:49:52 compute-0 sudo[30490]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wycortttnownwfugbetkjjbthlrrzgmh ; /usr/bin/python3'
Dec 11 08:49:52 compute-0 sudo[30490]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 11 08:49:52 compute-0 python3[30492]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1765442989.5968113-33955-280470357957281/source mode=0755 _original_basename=repo-setup-centos-powertools.repo follow=False checksum=4b0cf99aa89c5c5be0151545863a7a7568f67568 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 11 08:49:52 compute-0 sudo[30490]: pam_unix(sudo:session): session closed for user root
Dec 11 08:49:52 compute-0 sudo[30516]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vwuwdluxmnaczjdfgujnjfpajdsyurha ; /usr/bin/python3'
Dec 11 08:49:52 compute-0 sudo[30516]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 11 08:49:52 compute-0 python3[30518]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/repo-setup-centos-appstream.repo follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec 11 08:49:52 compute-0 sudo[30516]: pam_unix(sudo:session): session closed for user root
Dec 11 08:49:52 compute-0 sudo[30589]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cnfqfuvobpowywcqxhifynnsjtufsugd ; /usr/bin/python3'
Dec 11 08:49:52 compute-0 sudo[30589]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 11 08:49:52 compute-0 python3[30591]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1765442989.5968113-33955-280470357957281/source mode=0755 _original_basename=repo-setup-centos-appstream.repo follow=False checksum=e89244d2503b2996429dda1857290c1e91e393a1 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 11 08:49:52 compute-0 sudo[30589]: pam_unix(sudo:session): session closed for user root
Dec 11 08:49:52 compute-0 sudo[30615]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zwkybfqflisinvtfymooifvjaikajzug ; /usr/bin/python3'
Dec 11 08:49:52 compute-0 sudo[30615]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 11 08:49:53 compute-0 python3[30617]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/repo-setup-centos-baseos.repo follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec 11 08:49:53 compute-0 sudo[30615]: pam_unix(sudo:session): session closed for user root
Dec 11 08:49:53 compute-0 sudo[30688]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gdfoueodrixwmqpcpnkymjyepjgrtkoc ; /usr/bin/python3'
Dec 11 08:49:53 compute-0 sudo[30688]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 11 08:49:53 compute-0 python3[30690]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1765442989.5968113-33955-280470357957281/source mode=0755 _original_basename=repo-setup-centos-baseos.repo follow=False checksum=36d926db23a40dbfa5c84b5e4d43eac6fa2301d6 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 11 08:49:53 compute-0 sudo[30688]: pam_unix(sudo:session): session closed for user root
Dec 11 08:49:53 compute-0 sudo[30714]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wwnahzguqvkidwnjjsrhxnbfgoftruis ; /usr/bin/python3'
Dec 11 08:49:53 compute-0 sudo[30714]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 11 08:49:53 compute-0 python3[30716]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/delorean.repo.md5 follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec 11 08:49:53 compute-0 sudo[30714]: pam_unix(sudo:session): session closed for user root
Dec 11 08:49:53 compute-0 sudo[30787]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dvpxzhgdsqbvapnjkvoeerqwkposctra ; /usr/bin/python3'
Dec 11 08:49:53 compute-0 sudo[30787]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 11 08:49:54 compute-0 python3[30789]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1765442989.5968113-33955-280470357957281/source mode=0755 _original_basename=delorean.repo.md5 follow=False checksum=2583a70b3ee76a9837350b0837bc004a8e52405c backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 11 08:49:54 compute-0 sudo[30787]: pam_unix(sudo:session): session closed for user root
Dec 11 08:49:56 compute-0 sshd-session[30814]: Unable to negotiate with 192.168.122.11 port 56560: no matching host key type found. Their offer: sk-ssh-ed25519@openssh.com [preauth]
Dec 11 08:49:56 compute-0 sshd-session[30815]: Connection closed by 192.168.122.11 port 56532 [preauth]
Dec 11 08:49:56 compute-0 sshd-session[30818]: Connection closed by 192.168.122.11 port 56520 [preauth]
Dec 11 08:49:56 compute-0 sshd-session[30817]: Unable to negotiate with 192.168.122.11 port 56540: no matching host key type found. Their offer: ssh-ed25519 [preauth]
Dec 11 08:49:56 compute-0 sshd-session[30816]: Unable to negotiate with 192.168.122.11 port 56544: no matching host key type found. Their offer: sk-ecdsa-sha2-nistp256@openssh.com [preauth]
Dec 11 08:50:06 compute-0 python3[30847]: ansible-ansible.legacy.command Invoked with _raw_params=hostname _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 11 08:55:06 compute-0 sshd-session[29933]: Received disconnect from 38.102.83.179 port 58676:11: disconnected by user
Dec 11 08:55:06 compute-0 sshd-session[29933]: Disconnected from user zuul 38.102.83.179 port 58676
Dec 11 08:55:06 compute-0 sshd-session[29930]: pam_unix(sshd:session): session closed for user zuul
Dec 11 08:55:06 compute-0 systemd[1]: session-7.scope: Deactivated successfully.
Dec 11 08:55:06 compute-0 systemd[1]: session-7.scope: Consumed 5.206s CPU time.
Dec 11 08:55:06 compute-0 systemd-logind[792]: Session 7 logged out. Waiting for processes to exit.
Dec 11 08:55:06 compute-0 systemd-logind[792]: Removed session 7.
Dec 11 09:01:01 compute-0 CROND[30855]: (root) CMD (run-parts /etc/cron.hourly)
Dec 11 09:01:01 compute-0 run-parts[30858]: (/etc/cron.hourly) starting 0anacron
Dec 11 09:01:01 compute-0 anacron[30866]: Anacron started on 2025-12-11
Dec 11 09:01:01 compute-0 anacron[30866]: Will run job `cron.daily' in 32 min.
Dec 11 09:01:01 compute-0 anacron[30866]: Will run job `cron.weekly' in 52 min.
Dec 11 09:01:01 compute-0 anacron[30866]: Will run job `cron.monthly' in 72 min.
Dec 11 09:01:01 compute-0 anacron[30866]: Jobs will be executed sequentially
Dec 11 09:01:01 compute-0 run-parts[30868]: (/etc/cron.hourly) finished 0anacron
Dec 11 09:01:01 compute-0 CROND[30854]: (root) CMDEND (run-parts /etc/cron.hourly)
Dec 11 09:03:00 compute-0 sshd-session[30870]: Accepted publickey for zuul from 192.168.122.30 port 49770 ssh2: ECDSA SHA256:cT48FffNzE3FSGRebchaTzw3hIqcCIoBfXY30Q2C9bc
Dec 11 09:03:00 compute-0 systemd-logind[792]: New session 8 of user zuul.
Dec 11 09:03:00 compute-0 systemd[1]: Started Session 8 of User zuul.
Dec 11 09:03:00 compute-0 sshd-session[30870]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Dec 11 09:03:01 compute-0 python3.9[31023]: ansible-ansible.legacy.setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 11 09:03:02 compute-0 sudo[31202]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-formxurvyedvircxzhirbcfkagfuaenc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765443782.5201921-56-72112357872483/AnsiballZ_command.py'
Dec 11 09:03:02 compute-0 sudo[31202]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 11 09:03:03 compute-0 python3.9[31204]: ansible-ansible.legacy.command Invoked with _raw_params=set -euxo pipefail
                                            pushd /var/tmp
                                            curl -sL https://github.com/openstack-k8s-operators/repo-setup/archive/refs/heads/main.tar.gz | tar -xz
                                            pushd repo-setup-main
                                            python3 -m venv ./venv
                                            PBR_VERSION=0.0.0 ./venv/bin/pip install ./
                                            ./venv/bin/repo-setup current-podified -b antelope
                                            popd
                                            rm -rf repo-setup-main
                                             _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 11 09:03:12 compute-0 sudo[31202]: pam_unix(sudo:session): session closed for user root
Dec 11 09:03:12 compute-0 sshd-session[30873]: Connection closed by 192.168.122.30 port 49770
Dec 11 09:03:13 compute-0 sshd-session[30870]: pam_unix(sshd:session): session closed for user zuul
Dec 11 09:03:13 compute-0 systemd[1]: session-8.scope: Deactivated successfully.
Dec 11 09:03:13 compute-0 systemd[1]: session-8.scope: Consumed 10.074s CPU time.
Dec 11 09:03:13 compute-0 systemd-logind[792]: Session 8 logged out. Waiting for processes to exit.
Dec 11 09:03:13 compute-0 systemd-logind[792]: Removed session 8.
Dec 11 09:03:29 compute-0 sshd-session[31263]: Accepted publickey for zuul from 192.168.122.30 port 50128 ssh2: ECDSA SHA256:cT48FffNzE3FSGRebchaTzw3hIqcCIoBfXY30Q2C9bc
Dec 11 09:03:29 compute-0 systemd-logind[792]: New session 9 of user zuul.
Dec 11 09:03:29 compute-0 systemd[1]: Started Session 9 of User zuul.
Dec 11 09:03:29 compute-0 sshd-session[31263]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Dec 11 09:03:30 compute-0 python3.9[31416]: ansible-ansible.legacy.ping Invoked with data=pong
Dec 11 09:03:31 compute-0 python3.9[31590]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 11 09:03:32 compute-0 sudo[31740]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vyqaeizdailxsdfvhjhnsoinjqsxvbis ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765443811.97032-93-91208070692725/AnsiballZ_command.py'
Dec 11 09:03:32 compute-0 sudo[31740]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 11 09:03:32 compute-0 python3.9[31742]: ansible-ansible.legacy.command Invoked with _raw_params=PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin which growvols
                                             _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 11 09:03:32 compute-0 sudo[31740]: pam_unix(sudo:session): session closed for user root
Dec 11 09:03:33 compute-0 sudo[31893]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mqxgvdvmllmplnogegvfdfgfidflnfvo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765443813.0701678-129-73205882317605/AnsiballZ_stat.py'
Dec 11 09:03:33 compute-0 sudo[31893]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 11 09:03:33 compute-0 python3.9[31895]: ansible-ansible.builtin.stat Invoked with path=/etc/ansible/facts.d/bootc.fact follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 11 09:03:33 compute-0 sudo[31893]: pam_unix(sudo:session): session closed for user root
Dec 11 09:03:34 compute-0 sudo[32045]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vvamtvtihytufyioiommhmdplgqgqykq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765443813.9222555-153-46833780626095/AnsiballZ_file.py'
Dec 11 09:03:34 compute-0 sudo[32045]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 11 09:03:34 compute-0 python3.9[32047]: ansible-ansible.builtin.file Invoked with mode=755 path=/etc/ansible/facts.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 11 09:03:34 compute-0 sudo[32045]: pam_unix(sudo:session): session closed for user root
Dec 11 09:03:35 compute-0 sudo[32197]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ysggfndbwldjekocxvxwisthvsfatiif ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765443814.773411-177-103247491694788/AnsiballZ_stat.py'
Dec 11 09:03:35 compute-0 sudo[32197]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 11 09:03:35 compute-0 python3.9[32199]: ansible-ansible.legacy.stat Invoked with path=/etc/ansible/facts.d/bootc.fact follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 11 09:03:35 compute-0 sudo[32197]: pam_unix(sudo:session): session closed for user root
Dec 11 09:03:35 compute-0 sudo[32320]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-buivwukgkbexqrtsqexqsfuseccjovou ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765443814.773411-177-103247491694788/AnsiballZ_copy.py'
Dec 11 09:03:35 compute-0 sudo[32320]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 11 09:03:35 compute-0 python3.9[32322]: ansible-ansible.legacy.copy Invoked with dest=/etc/ansible/facts.d/bootc.fact mode=755 src=/home/zuul/.ansible/tmp/ansible-tmp-1765443814.773411-177-103247491694788/.source.fact _original_basename=bootc.fact follow=False checksum=eb4122ce7fc50a38407beb511c4ff8c178005b12 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 11 09:03:36 compute-0 sudo[32320]: pam_unix(sudo:session): session closed for user root
Dec 11 09:03:36 compute-0 sudo[32472]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xrtonapcbkqpcjrrfoujtbwbnbgvuxcp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765443816.185429-222-139509841684144/AnsiballZ_setup.py'
Dec 11 09:03:36 compute-0 sudo[32472]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 11 09:03:36 compute-0 python3.9[32474]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 11 09:03:37 compute-0 sudo[32472]: pam_unix(sudo:session): session closed for user root
Dec 11 09:03:37 compute-0 sudo[32628]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dxrqephmfctdpmqdcagxlupabfevtbbn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765443817.2686775-246-16117521102520/AnsiballZ_file.py'
Dec 11 09:03:37 compute-0 sudo[32628]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 11 09:03:37 compute-0 python3.9[32630]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/log/journal setype=var_log_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 11 09:03:37 compute-0 sudo[32628]: pam_unix(sudo:session): session closed for user root
Dec 11 09:03:38 compute-0 sudo[32780]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xacicwmmwiifjczbvgonshdluhajfope ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765443818.0114977-273-51894481685180/AnsiballZ_file.py'
Dec 11 09:03:38 compute-0 sudo[32780]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 11 09:03:38 compute-0 python3.9[32782]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/config-data/ansible-generated recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 11 09:03:38 compute-0 sudo[32780]: pam_unix(sudo:session): session closed for user root
Dec 11 09:03:39 compute-0 python3.9[32932]: ansible-ansible.builtin.service_facts Invoked
Dec 11 09:03:43 compute-0 python3.9[33185]: ansible-ansible.builtin.lineinfile Invoked with line=cloud-init=disabled path=/proc/cmdline state=present encoding=utf-8 backrefs=False create=False backup=False firstmatch=False unsafe_writes=False regexp=None search_string=None insertafter=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 11 09:03:43 compute-0 python3.9[33335]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 11 09:03:45 compute-0 python3.9[33489]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local', 'distribution'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 11 09:03:46 compute-0 sudo[33645]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yypliybbmhqnfgkinwodhzlesbvjkafr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765443825.720922-417-180751455504273/AnsiballZ_setup.py'
Dec 11 09:03:46 compute-0 sudo[33645]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 11 09:03:46 compute-0 python3.9[33647]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Dec 11 09:03:46 compute-0 sudo[33645]: pam_unix(sudo:session): session closed for user root
Dec 11 09:03:47 compute-0 sudo[33729]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-quzwkxtbpmpeyhfhdiphpytxpptaewpz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765443825.720922-417-180751455504273/AnsiballZ_dnf.py'
Dec 11 09:03:47 compute-0 sudo[33729]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 11 09:03:47 compute-0 python3.9[33731]: ansible-ansible.legacy.dnf Invoked with name=['driverctl', 'lvm2', 'crudini', 'jq', 'nftables', 'NetworkManager', 'openstack-selinux', 'python3-libselinux', 'python3-pyyaml', 'rsync', 'tmpwatch', 'sysstat', 'iproute-tc', 'ksmtuned', 'systemd-container', 'crypto-policies-scripts', 'grubby', 'sos'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Dec 11 09:04:43 compute-0 systemd[1]: Reloading.
Dec 11 09:04:43 compute-0 systemd-rc-local-generator[33928]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 11 09:04:43 compute-0 systemd[1]: Listening on Device-mapper event daemon FIFOs.
Dec 11 09:04:44 compute-0 systemd[1]: Reloading.
Dec 11 09:04:44 compute-0 systemd-rc-local-generator[33966]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 11 09:04:44 compute-0 systemd[1]: Starting dnf makecache...
Dec 11 09:04:44 compute-0 systemd[1]: Starting Monitoring of LVM2 mirrors, snapshots etc. using dmeventd or progress polling...
Dec 11 09:04:44 compute-0 systemd[1]: Finished Monitoring of LVM2 mirrors, snapshots etc. using dmeventd or progress polling.
Dec 11 09:04:44 compute-0 systemd[1]: Reloading.
Dec 11 09:04:44 compute-0 systemd-rc-local-generator[34009]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 11 09:04:44 compute-0 dnf[33978]: Failed determining last makecache time.
Dec 11 09:04:44 compute-0 systemd[1]: Listening on LVM2 poll daemon socket.
Dec 11 09:04:44 compute-0 dnf[33978]: delorean-openstack-barbican-42b4c41831408a8e323 114 kB/s | 3.0 kB     00:00
Dec 11 09:04:45 compute-0 dnf[33978]: delorean-python-glean-10df0bd91b9bc5c9fd9cc02d7 169 kB/s | 3.0 kB     00:00
Dec 11 09:04:45 compute-0 dnf[33978]: delorean-openstack-cinder-1c00d6490d88e436f26ef  89 kB/s | 3.0 kB     00:00
Dec 11 09:04:45 compute-0 dnf[33978]: delorean-python-stevedore-c4acc5639fd2329372142 158 kB/s | 3.0 kB     00:00
Dec 11 09:04:45 compute-0 dnf[33978]: delorean-python-cloudkitty-tests-tempest-2c80f8 173 kB/s | 3.0 kB     00:00
Dec 11 09:04:45 compute-0 dnf[33978]: delorean-os-refresh-config-9bfc52b5049be2d8de61 162 kB/s | 3.0 kB     00:00
Dec 11 09:04:45 compute-0 dnf[33978]: delorean-openstack-nova-6f8decf0b4f1aa2e96292b6 132 kB/s | 3.0 kB     00:00
Dec 11 09:04:45 compute-0 dnf[33978]: delorean-python-designate-tests-tempest-347fdbc 159 kB/s | 3.0 kB     00:00
Dec 11 09:04:45 compute-0 dnf[33978]: delorean-openstack-glance-1fd12c29b339f30fe823e 172 kB/s | 3.0 kB     00:00
Dec 11 09:04:45 compute-0 dnf[33978]: delorean-openstack-keystone-e4b40af0ae3698fbbbb 175 kB/s | 3.0 kB     00:00
Dec 11 09:04:45 compute-0 dnf[33978]: delorean-openstack-manila-3c01b7181572c95dac462 163 kB/s | 3.0 kB     00:00
Dec 11 09:04:45 compute-0 dnf[33978]: delorean-python-whitebox-neutron-tests-tempest- 161 kB/s | 3.0 kB     00:00
Dec 11 09:04:45 compute-0 dnf[33978]: delorean-openstack-octavia-ba397f07a7331190208c 145 kB/s | 3.0 kB     00:00
Dec 11 09:04:45 compute-0 dbus-broker-launch[746]: Noticed file-system modification, trigger reload.
Dec 11 09:04:45 compute-0 dbus-broker-launch[746]: Noticed file-system modification, trigger reload.
Dec 11 09:04:45 compute-0 dnf[33978]: delorean-openstack-watcher-c014f81a8647287f6dcc 154 kB/s | 3.0 kB     00:00
Dec 11 09:04:45 compute-0 dnf[33978]: delorean-ansible-config_template-5ccaa22121a7ff 137 kB/s | 3.0 kB     00:00
Dec 11 09:04:45 compute-0 dnf[33978]: delorean-puppet-ceph-7352068d7b8c84ded636ab3158 164 kB/s | 3.0 kB     00:00
Dec 11 09:04:45 compute-0 dnf[33978]: delorean-openstack-swift-dc98a8463506ac520c469a 159 kB/s | 3.0 kB     00:00
Dec 11 09:04:45 compute-0 dnf[33978]: delorean-python-tempestconf-8515371b7cceebd4282 138 kB/s | 3.0 kB     00:00
Dec 11 09:04:45 compute-0 dnf[33978]: delorean-openstack-heat-ui-013accbfd179753bc3f0 165 kB/s | 3.0 kB     00:00
Dec 11 09:04:45 compute-0 dnf[33978]: CentOS Stream 9 - BaseOS                         68 kB/s | 7.0 kB     00:00
Dec 11 09:04:45 compute-0 dnf[33978]: CentOS Stream 9 - AppStream                      31 kB/s | 7.4 kB     00:00
Dec 11 09:04:46 compute-0 dnf[33978]: CentOS Stream 9 - CRB                            29 kB/s | 6.9 kB     00:00
Dec 11 09:04:46 compute-0 dnf[33978]: CentOS Stream 9 - Extras packages                72 kB/s | 8.3 kB     00:00
Dec 11 09:04:46 compute-0 dnf[33978]: dlrn-antelope-testing                           142 kB/s | 3.0 kB     00:00
Dec 11 09:04:46 compute-0 dnf[33978]: dlrn-antelope-build-deps                        154 kB/s | 3.0 kB     00:00
Dec 11 09:04:46 compute-0 dnf[33978]: centos9-rabbitmq                                115 kB/s | 3.0 kB     00:00
Dec 11 09:04:46 compute-0 dnf[33978]: centos9-storage                                 135 kB/s | 3.0 kB     00:00
Dec 11 09:04:46 compute-0 dnf[33978]: centos9-opstools                                131 kB/s | 3.0 kB     00:00
Dec 11 09:04:46 compute-0 dnf[33978]: NFV SIG OpenvSwitch                             120 kB/s | 3.0 kB     00:00
Dec 11 09:04:46 compute-0 dnf[33978]: repo-setup-centos-appstream                     199 kB/s | 4.4 kB     00:00
Dec 11 09:04:46 compute-0 dnf[33978]: repo-setup-centos-baseos                        159 kB/s | 3.9 kB     00:00
Dec 11 09:04:46 compute-0 dnf[33978]: repo-setup-centos-highavailability              154 kB/s | 3.9 kB     00:00
Dec 11 09:04:46 compute-0 dnf[33978]: repo-setup-centos-powertools                    186 kB/s | 4.3 kB     00:00
Dec 11 09:04:47 compute-0 dnf[33978]: Extra Packages for Enterprise Linux 9 - x86_64  220 kB/s |  30 kB     00:00
Dec 11 09:04:47 compute-0 dnf[33978]: Metadata cache created.
Dec 11 09:04:47 compute-0 systemd[1]: dnf-makecache.service: Deactivated successfully.
Dec 11 09:04:47 compute-0 systemd[1]: Finished dnf makecache.
Dec 11 09:04:47 compute-0 systemd[1]: dnf-makecache.service: Consumed 2.248s CPU time.
Dec 11 09:05:58 compute-0 kernel: SELinux:  Converting 2719 SID table entries...
Dec 11 09:05:58 compute-0 kernel: SELinux:  policy capability network_peer_controls=1
Dec 11 09:05:58 compute-0 kernel: SELinux:  policy capability open_perms=1
Dec 11 09:05:58 compute-0 kernel: SELinux:  policy capability extended_socket_class=1
Dec 11 09:05:58 compute-0 kernel: SELinux:  policy capability always_check_network=0
Dec 11 09:05:58 compute-0 kernel: SELinux:  policy capability cgroup_seclabel=1
Dec 11 09:05:58 compute-0 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Dec 11 09:05:58 compute-0 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Dec 11 09:05:58 compute-0 dbus-broker-launch[780]: avc:  op=load_policy lsm=selinux seqno=8 res=1
Dec 11 09:05:59 compute-0 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Dec 11 09:05:59 compute-0 systemd[1]: Starting man-db-cache-update.service...
Dec 11 09:05:59 compute-0 systemd[1]: Reloading.
Dec 11 09:05:59 compute-0 systemd-rc-local-generator[34399]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 11 09:05:59 compute-0 systemd[1]: Queuing reload/restart jobs for marked units…
Dec 11 09:06:00 compute-0 sudo[33729]: pam_unix(sudo:session): session closed for user root
Dec 11 09:06:00 compute-0 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Dec 11 09:06:00 compute-0 systemd[1]: Finished man-db-cache-update.service.
Dec 11 09:06:00 compute-0 systemd[1]: man-db-cache-update.service: Consumed 1.410s CPU time.
Dec 11 09:06:00 compute-0 systemd[1]: run-r9d3615a6c6574119a53388fd0c448882.service: Deactivated successfully.
Dec 11 09:06:00 compute-0 sudo[35308]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-baxosagehfutafzbzfpwaaiwtiwcpxdd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765443960.1993723-453-152152145461819/AnsiballZ_command.py'
Dec 11 09:06:00 compute-0 sudo[35308]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 11 09:06:00 compute-0 python3.9[35310]: ansible-ansible.legacy.command Invoked with _raw_params=rpm -V driverctl lvm2 crudini jq nftables NetworkManager openstack-selinux python3-libselinux python3-pyyaml rsync tmpwatch sysstat iproute-tc ksmtuned systemd-container crypto-policies-scripts grubby sos _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 11 09:06:01 compute-0 sudo[35308]: pam_unix(sudo:session): session closed for user root
Dec 11 09:06:02 compute-0 sudo[35591]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sjkergtekvluzhbhspvhfafpxaeccpor ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765443961.8630505-477-217028333153552/AnsiballZ_selinux.py'
Dec 11 09:06:02 compute-0 sudo[35591]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 11 09:06:02 compute-0 python3.9[35593]: ansible-ansible.posix.selinux Invoked with policy=targeted state=enforcing configfile=/etc/selinux/config update_kernel_param=False
Dec 11 09:06:02 compute-0 sudo[35591]: pam_unix(sudo:session): session closed for user root
Dec 11 09:06:03 compute-0 sshd-session[35516]: error: maximum authentication attempts exceeded for root from 82.67.140.251 port 53445 ssh2 [preauth]
Dec 11 09:06:03 compute-0 sshd-session[35516]: Disconnecting authenticating user root 82.67.140.251 port 53445: Too many authentication failures [preauth]
Dec 11 09:06:03 compute-0 sudo[35745]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-poytltqrqnvdzaoqlgslpladijeelsyj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765443963.2064016-510-5104524902942/AnsiballZ_command.py'
Dec 11 09:06:03 compute-0 sudo[35745]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 11 09:06:03 compute-0 python3.9[35747]: ansible-ansible.legacy.command Invoked with cmd=dd if=/dev/zero of=/swap count=1024 bs=1M creates=/swap _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None removes=None stdin=None
Dec 11 09:06:04 compute-0 sshd-session[35670]: error: maximum authentication attempts exceeded for root from 82.67.140.251 port 53566 ssh2 [preauth]
Dec 11 09:06:04 compute-0 sshd-session[35670]: Disconnecting authenticating user root 82.67.140.251 port 53566: Too many authentication failures [preauth]
Dec 11 09:06:05 compute-0 sshd-session[35749]: error: maximum authentication attempts exceeded for root from 82.67.140.251 port 53715 ssh2 [preauth]
Dec 11 09:06:05 compute-0 sshd-session[35749]: Disconnecting authenticating user root 82.67.140.251 port 53715: Too many authentication failures [preauth]
Dec 11 09:06:06 compute-0 sudo[35745]: pam_unix(sudo:session): session closed for user root
Dec 11 09:06:06 compute-0 sshd-session[35751]: Received disconnect from 82.67.140.251 port 53866:11: disconnected by user [preauth]
Dec 11 09:06:06 compute-0 sshd-session[35751]: Disconnected from authenticating user root 82.67.140.251 port 53866 [preauth]
Dec 11 09:06:06 compute-0 sudo[35904]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-aecgzrrlkdvrmztsujblimveoouqxmxu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765443966.4191594-534-79692618868897/AnsiballZ_file.py'
Dec 11 09:06:06 compute-0 sudo[35904]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 11 09:06:07 compute-0 python3.9[35906]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/swap recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False state=None _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 11 09:06:07 compute-0 sudo[35904]: pam_unix(sudo:session): session closed for user root
Dec 11 09:06:07 compute-0 sshd-session[35883]: Invalid user admin from 82.67.140.251 port 53959
Dec 11 09:06:07 compute-0 sshd-session[35883]: error: maximum authentication attempts exceeded for invalid user admin from 82.67.140.251 port 53959 ssh2 [preauth]
Dec 11 09:06:07 compute-0 sshd-session[35883]: Disconnecting invalid user admin 82.67.140.251 port 53959: Too many authentication failures [preauth]
Dec 11 09:06:08 compute-0 sudo[36058]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-awqxxjtotjcjasrzzaqkfohulfvnnhgq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765443967.5592568-558-255826501171554/AnsiballZ_mount.py'
Dec 11 09:06:08 compute-0 sudo[36058]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 11 09:06:08 compute-0 python3.9[36060]: ansible-ansible.posix.mount Invoked with dump=0 fstype=swap name=none opts=sw passno=0 src=/swap state=present path=none boot=True opts_no_log=False backup=False fstab=None
Dec 11 09:06:08 compute-0 sudo[36058]: pam_unix(sudo:session): session closed for user root
Dec 11 09:06:08 compute-0 sshd-session[36004]: Invalid user admin from 82.67.140.251 port 54104
Dec 11 09:06:08 compute-0 sshd-session[36004]: error: maximum authentication attempts exceeded for invalid user admin from 82.67.140.251 port 54104 ssh2 [preauth]
Dec 11 09:06:08 compute-0 sshd-session[36004]: Disconnecting invalid user admin 82.67.140.251 port 54104: Too many authentication failures [preauth]
Dec 11 09:06:09 compute-0 sshd-session[36085]: Invalid user admin from 82.67.140.251 port 54251
Dec 11 09:06:09 compute-0 sudo[36212]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-esfrumammxfsyqoktephxyfdftugmtzf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765443969.5971-642-42862112424267/AnsiballZ_file.py'
Dec 11 09:06:09 compute-0 sudo[36212]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 11 09:06:09 compute-0 sshd-session[36085]: Received disconnect from 82.67.140.251 port 54251:11: disconnected by user [preauth]
Dec 11 09:06:09 compute-0 sshd-session[36085]: Disconnected from invalid user admin 82.67.140.251 port 54251 [preauth]
Dec 11 09:06:10 compute-0 python3.9[36214]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/pki/ca-trust/source/anchors setype=cert_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 11 09:06:10 compute-0 sudo[36212]: pam_unix(sudo:session): session closed for user root
Dec 11 09:06:10 compute-0 sshd-session[36215]: Invalid user oracle from 82.67.140.251 port 54349
Dec 11 09:06:11 compute-0 sshd-session[36215]: error: maximum authentication attempts exceeded for invalid user oracle from 82.67.140.251 port 54349 ssh2 [preauth]
Dec 11 09:06:11 compute-0 sshd-session[36215]: Disconnecting invalid user oracle 82.67.140.251 port 54349: Too many authentication failures [preauth]
Dec 11 09:06:11 compute-0 sshd-session[36241]: Invalid user oracle from 82.67.140.251 port 54485
Dec 11 09:06:12 compute-0 sshd-session[36241]: error: maximum authentication attempts exceeded for invalid user oracle from 82.67.140.251 port 54485 ssh2 [preauth]
Dec 11 09:06:12 compute-0 sshd-session[36241]: Disconnecting invalid user oracle 82.67.140.251 port 54485: Too many authentication failures [preauth]
Dec 11 09:06:13 compute-0 sshd-session[36243]: Invalid user oracle from 82.67.140.251 port 54625
Dec 11 09:06:13 compute-0 sshd-session[36243]: Received disconnect from 82.67.140.251 port 54625:11: disconnected by user [preauth]
Dec 11 09:06:13 compute-0 sshd-session[36243]: Disconnected from invalid user oracle 82.67.140.251 port 54625 [preauth]
Dec 11 09:06:13 compute-0 sudo[36372]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-esvqiuhqcjizyxcrvostpqwdnahbroij ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765443973.0548582-666-262337104183059/AnsiballZ_stat.py'
Dec 11 09:06:13 compute-0 sudo[36372]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 11 09:06:13 compute-0 sshd-session[36351]: Invalid user usuario from 82.67.140.251 port 54716
Dec 11 09:06:14 compute-0 python3.9[36374]: ansible-ansible.legacy.stat Invoked with path=/etc/pki/ca-trust/source/anchors/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 11 09:06:14 compute-0 sudo[36372]: pam_unix(sudo:session): session closed for user root
Dec 11 09:06:14 compute-0 sshd-session[36351]: error: maximum authentication attempts exceeded for invalid user usuario from 82.67.140.251 port 54716 ssh2 [preauth]
Dec 11 09:06:14 compute-0 sshd-session[36351]: Disconnecting invalid user usuario 82.67.140.251 port 54716: Too many authentication failures [preauth]
Dec 11 09:06:14 compute-0 sudo[36497]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-iuvkdzzsydmkbqqymyxffowykadzmszk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765443973.0548582-666-262337104183059/AnsiballZ_copy.py'
Dec 11 09:06:14 compute-0 sudo[36497]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 11 09:06:15 compute-0 sshd-session[36462]: Invalid user usuario from 82.67.140.251 port 54841
Dec 11 09:06:15 compute-0 python3.9[36499]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/ca-trust/source/anchors/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1765443973.0548582-666-262337104183059/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=b8579c206b05c2d6a847310b06d4e3aec15650c5 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 11 09:06:15 compute-0 sudo[36497]: pam_unix(sudo:session): session closed for user root
Dec 11 09:06:15 compute-0 sshd-session[36462]: error: maximum authentication attempts exceeded for invalid user usuario from 82.67.140.251 port 54841 ssh2 [preauth]
Dec 11 09:06:15 compute-0 sshd-session[36462]: Disconnecting invalid user usuario 82.67.140.251 port 54841: Too many authentication failures [preauth]
Dec 11 09:06:16 compute-0 sshd-session[36524]: Invalid user usuario from 82.67.140.251 port 54974
Dec 11 09:06:16 compute-0 sshd-session[36524]: Received disconnect from 82.67.140.251 port 54974:11: disconnected by user [preauth]
Dec 11 09:06:16 compute-0 sshd-session[36524]: Disconnected from invalid user usuario 82.67.140.251 port 54974 [preauth]
Dec 11 09:06:17 compute-0 sshd-session[36526]: Invalid user test from 82.67.140.251 port 55070
Dec 11 09:06:17 compute-0 sshd-session[36526]: error: maximum authentication attempts exceeded for invalid user test from 82.67.140.251 port 55070 ssh2 [preauth]
Dec 11 09:06:17 compute-0 sshd-session[36526]: Disconnecting invalid user test 82.67.140.251 port 55070: Too many authentication failures [preauth]
Dec 11 09:06:17 compute-0 sudo[36655]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jieoswyhwgrnczegkmeklcofiahrozip ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765443977.7235825-738-148425998690770/AnsiballZ_stat.py'
Dec 11 09:06:17 compute-0 sudo[36655]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 11 09:06:18 compute-0 python3.9[36657]: ansible-ansible.builtin.stat Invoked with path=/etc/lvm/devices/system.devices follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 11 09:06:18 compute-0 sudo[36655]: pam_unix(sudo:session): session closed for user root
Dec 11 09:06:18 compute-0 sshd-session[36580]: Invalid user test from 82.67.140.251 port 55189
Dec 11 09:06:18 compute-0 sudo[36807]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vtqjqnknznmlfbeddgvykcxibislijkq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765443978.385979-762-99791404361229/AnsiballZ_command.py'
Dec 11 09:06:18 compute-0 sudo[36807]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 11 09:06:18 compute-0 python3.9[36809]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/vgimportdevices --all _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 11 09:06:18 compute-0 sshd-session[36580]: error: maximum authentication attempts exceeded for invalid user test from 82.67.140.251 port 55189 ssh2 [preauth]
Dec 11 09:06:18 compute-0 sshd-session[36580]: Disconnecting invalid user test 82.67.140.251 port 55189: Too many authentication failures [preauth]
Dec 11 09:06:18 compute-0 sudo[36807]: pam_unix(sudo:session): session closed for user root
Dec 11 09:06:19 compute-0 sudo[36962]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qqfsoahyfwaemvalhabweyxgtfvzxpvb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765443979.1762266-786-63028269810770/AnsiballZ_file.py'
Dec 11 09:06:19 compute-0 sudo[36962]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 11 09:06:19 compute-0 sshd-session[36835]: Invalid user test from 82.67.140.251 port 55326
Dec 11 09:06:19 compute-0 python3.9[36964]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/etc/lvm/devices/system.devices state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 11 09:06:19 compute-0 sudo[36962]: pam_unix(sudo:session): session closed for user root
Dec 11 09:06:19 compute-0 sshd-session[36835]: Received disconnect from 82.67.140.251 port 55326:11: disconnected by user [preauth]
Dec 11 09:06:19 compute-0 sshd-session[36835]: Disconnected from invalid user test 82.67.140.251 port 55326 [preauth]
Dec 11 09:06:20 compute-0 sshd-session[36989]: Invalid user user from 82.67.140.251 port 55405
Dec 11 09:06:20 compute-0 sshd-session[36989]: error: maximum authentication attempts exceeded for invalid user user from 82.67.140.251 port 55405 ssh2 [preauth]
Dec 11 09:06:20 compute-0 sshd-session[36989]: Disconnecting invalid user user 82.67.140.251 port 55405: Too many authentication failures [preauth]
Dec 11 09:06:21 compute-0 sudo[37118]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pupniqjpnqevuqklsdfjfshywscrbmuc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765443980.8213038-819-104614955432467/AnsiballZ_getent.py'
Dec 11 09:06:21 compute-0 sudo[37118]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 11 09:06:21 compute-0 python3.9[37120]: ansible-ansible.builtin.getent Invoked with database=passwd key=qemu fail_key=True service=None split=None
Dec 11 09:06:21 compute-0 sudo[37118]: pam_unix(sudo:session): session closed for user root
Dec 11 09:06:21 compute-0 rsyslogd[1010]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Dec 11 09:06:21 compute-0 sshd-session[37056]: Invalid user user from 82.67.140.251 port 55519
Dec 11 09:06:22 compute-0 sshd-session[37056]: error: maximum authentication attempts exceeded for invalid user user from 82.67.140.251 port 55519 ssh2 [preauth]
Dec 11 09:06:22 compute-0 sshd-session[37056]: Disconnecting invalid user user 82.67.140.251 port 55519: Too many authentication failures [preauth]
Dec 11 09:06:22 compute-0 sudo[37274]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uxagcpgdurjkwyuejqlhasuptnuyeedb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765443981.814254-843-126480453409230/AnsiballZ_group.py'
Dec 11 09:06:22 compute-0 sudo[37274]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 11 09:06:22 compute-0 python3.9[37276]: ansible-ansible.builtin.group Invoked with gid=107 name=qemu state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Dec 11 09:06:22 compute-0 groupadd[37277]: group added to /etc/group: name=qemu, GID=107
Dec 11 09:06:22 compute-0 groupadd[37277]: group added to /etc/gshadow: name=qemu
Dec 11 09:06:22 compute-0 groupadd[37277]: new group: name=qemu, GID=107
Dec 11 09:06:22 compute-0 sudo[37274]: pam_unix(sudo:session): session closed for user root
Dec 11 09:06:22 compute-0 sshd-session[37255]: Invalid user user from 82.67.140.251 port 55623
Dec 11 09:06:23 compute-0 sshd-session[37255]: Received disconnect from 82.67.140.251 port 55623:11: disconnected by user [preauth]
Dec 11 09:06:23 compute-0 sshd-session[37255]: Disconnected from invalid user user 82.67.140.251 port 55623 [preauth]
Dec 11 09:06:23 compute-0 sudo[37432]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ajdzektbtbtvwhyiktyzupaknwrxhioz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765443982.8339145-867-165653701185146/AnsiballZ_user.py'
Dec 11 09:06:23 compute-0 sudo[37432]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 11 09:06:23 compute-0 python3.9[37436]: ansible-ansible.builtin.user Invoked with comment=qemu user group=qemu groups=[''] name=qemu shell=/sbin/nologin state=present uid=107 non_unique=False force=False remove=False create_home=True system=False move_home=False append=False ssh_key_bits=0 ssh_key_type=rsa ssh_key_comment=ansible-generated on compute-0 update_password=always home=None password=NOT_LOGGING_PARAMETER login_class=None password_expire_max=None password_expire_min=None password_expire_warn=None hidden=None seuser=None skeleton=None generate_ssh_key=None ssh_key_file=None ssh_key_passphrase=NOT_LOGGING_PARAMETER expires=None password_lock=None local=None profile=None authorization=None role=None umask=None password_expire_account_disable=None uid_min=None uid_max=None
Dec 11 09:06:23 compute-0 useradd[37438]: new user: name=qemu, UID=107, GID=107, home=/home/qemu, shell=/sbin/nologin, from=/dev/pts/0
Dec 11 09:06:23 compute-0 sudo[37432]: pam_unix(sudo:session): session closed for user root
Dec 11 09:06:23 compute-0 sshd-session[37434]: Invalid user ftpuser from 82.67.140.251 port 55761
Dec 11 09:06:24 compute-0 sudo[37594]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-shxwhttyzlddgeyecxyhxouzouamltqp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765443984.0175908-891-257184705647760/AnsiballZ_getent.py'
Dec 11 09:06:24 compute-0 sudo[37594]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 11 09:06:24 compute-0 sshd-session[37434]: error: maximum authentication attempts exceeded for invalid user ftpuser from 82.67.140.251 port 55761 ssh2 [preauth]
Dec 11 09:06:24 compute-0 sshd-session[37434]: Disconnecting invalid user ftpuser 82.67.140.251 port 55761: Too many authentication failures [preauth]
Dec 11 09:06:24 compute-0 python3.9[37596]: ansible-ansible.builtin.getent Invoked with database=passwd key=hugetlbfs fail_key=True service=None split=None
Dec 11 09:06:24 compute-0 sudo[37594]: pam_unix(sudo:session): session closed for user root
Dec 11 09:06:24 compute-0 sudo[37749]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wlvrmjftozejhrhocroeyuqarhctvsmg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765443984.7403097-915-238829898788502/AnsiballZ_group.py'
Dec 11 09:06:24 compute-0 sudo[37749]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 11 09:06:25 compute-0 sshd-session[37622]: Invalid user ftpuser from 82.67.140.251 port 55861
Dec 11 09:06:25 compute-0 python3.9[37751]: ansible-ansible.builtin.group Invoked with gid=42477 name=hugetlbfs state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Dec 11 09:06:25 compute-0 groupadd[37752]: group added to /etc/group: name=hugetlbfs, GID=42477
Dec 11 09:06:25 compute-0 groupadd[37752]: group added to /etc/gshadow: name=hugetlbfs
Dec 11 09:06:25 compute-0 groupadd[37752]: new group: name=hugetlbfs, GID=42477
Dec 11 09:06:25 compute-0 sudo[37749]: pam_unix(sudo:session): session closed for user root
Dec 11 09:06:25 compute-0 sshd-session[37622]: error: maximum authentication attempts exceeded for invalid user ftpuser from 82.67.140.251 port 55861 ssh2 [preauth]
Dec 11 09:06:25 compute-0 sshd-session[37622]: Disconnecting invalid user ftpuser 82.67.140.251 port 55861: Too many authentication failures [preauth]
Dec 11 09:06:26 compute-0 sudo[37909]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tfnvvgfvxhdaowzmoxkxuuaphhxuctdd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765443985.7533178-942-80951519396452/AnsiballZ_file.py'
Dec 11 09:06:26 compute-0 sudo[37909]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 11 09:06:26 compute-0 python3.9[37911]: ansible-ansible.builtin.file Invoked with group=qemu mode=0755 owner=qemu path=/var/lib/vhost_sockets setype=virt_cache_t seuser=system_u state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None serole=None selevel=None attributes=None
Dec 11 09:06:26 compute-0 sshd-session[37805]: Invalid user ftpuser from 82.67.140.251 port 55975
Dec 11 09:06:26 compute-0 sudo[37909]: pam_unix(sudo:session): session closed for user root
Dec 11 09:06:26 compute-0 sshd-session[37805]: Received disconnect from 82.67.140.251 port 55975:11: disconnected by user [preauth]
Dec 11 09:06:26 compute-0 sshd-session[37805]: Disconnected from invalid user ftpuser 82.67.140.251 port 55975 [preauth]
Dec 11 09:06:27 compute-0 sudo[38063]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dypqzpytcbtqcqzadgiysjohrbyxnmns ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765443986.7742841-975-239172470042518/AnsiballZ_dnf.py'
Dec 11 09:06:27 compute-0 sudo[38063]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 11 09:06:27 compute-0 sshd-session[37941]: Invalid user test1 from 82.67.140.251 port 56079
Dec 11 09:06:27 compute-0 python3.9[38065]: ansible-ansible.legacy.dnf Invoked with name=['dracut-config-generic'] state=absent allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Dec 11 09:06:27 compute-0 sshd-session[37941]: error: maximum authentication attempts exceeded for invalid user test1 from 82.67.140.251 port 56079 ssh2 [preauth]
Dec 11 09:06:27 compute-0 sshd-session[37941]: Disconnecting invalid user test1 82.67.140.251 port 56079: Too many authentication failures [preauth]
Dec 11 09:06:28 compute-0 sshd-session[38067]: Invalid user test1 from 82.67.140.251 port 56225
Dec 11 09:06:29 compute-0 sshd-session[38067]: error: maximum authentication attempts exceeded for invalid user test1 from 82.67.140.251 port 56225 ssh2 [preauth]
Dec 11 09:06:29 compute-0 sshd-session[38067]: Disconnecting invalid user test1 82.67.140.251 port 56225: Too many authentication failures [preauth]
Dec 11 09:06:30 compute-0 sshd-session[38069]: Invalid user test1 from 82.67.140.251 port 56403
Dec 11 09:06:30 compute-0 sshd-session[38069]: Received disconnect from 82.67.140.251 port 56403:11: disconnected by user [preauth]
Dec 11 09:06:30 compute-0 sshd-session[38069]: Disconnected from invalid user test1 82.67.140.251 port 56403 [preauth]
Dec 11 09:06:30 compute-0 sudo[38063]: pam_unix(sudo:session): session closed for user root
Dec 11 09:06:30 compute-0 sudo[38223]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tqinniylitrmanqvqyrjxldbauktrsep ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765443990.5539339-999-256426451209643/AnsiballZ_file.py'
Dec 11 09:06:30 compute-0 sudo[38223]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 11 09:06:30 compute-0 sshd-session[38075]: Invalid user test2 from 82.67.140.251 port 56501
Dec 11 09:06:30 compute-0 python3.9[38225]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/modules-load.d setype=etc_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 11 09:06:31 compute-0 sudo[38223]: pam_unix(sudo:session): session closed for user root
Dec 11 09:06:31 compute-0 sshd-session[38075]: error: maximum authentication attempts exceeded for invalid user test2 from 82.67.140.251 port 56501 ssh2 [preauth]
Dec 11 09:06:31 compute-0 sshd-session[38075]: Disconnecting invalid user test2 82.67.140.251 port 56501: Too many authentication failures [preauth]
Dec 11 09:06:31 compute-0 sudo[38375]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-itupkaengotvmnlvuxyutqnwjcmpqatv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765443991.2467234-1023-233926814739623/AnsiballZ_stat.py'
Dec 11 09:06:31 compute-0 sudo[38375]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 11 09:06:31 compute-0 python3.9[38377]: ansible-ansible.legacy.stat Invoked with path=/etc/modules-load.d/99-edpm.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 11 09:06:31 compute-0 sudo[38375]: pam_unix(sudo:session): session closed for user root
Dec 11 09:06:32 compute-0 sshd-session[38378]: Invalid user test2 from 82.67.140.251 port 56648
Dec 11 09:06:32 compute-0 sudo[38500]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dyasaixtbcdgzhoihnuznxrgegaztagl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765443991.2467234-1023-233926814739623/AnsiballZ_copy.py'
Dec 11 09:06:32 compute-0 sudo[38500]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 11 09:06:32 compute-0 python3.9[38502]: ansible-ansible.legacy.copy Invoked with dest=/etc/modules-load.d/99-edpm.conf group=root mode=0644 owner=root setype=etc_t src=/home/zuul/.ansible/tmp/ansible-tmp-1765443991.2467234-1023-233926814739623/.source.conf follow=False _original_basename=edpm-modprobe.conf.j2 checksum=8021efe01721d8fa8cab46b95c00ec1be6dbb9d0 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Dec 11 09:06:32 compute-0 sudo[38500]: pam_unix(sudo:session): session closed for user root
Dec 11 09:06:32 compute-0 sshd-session[38378]: error: maximum authentication attempts exceeded for invalid user test2 from 82.67.140.251 port 56648 ssh2 [preauth]
Dec 11 09:06:32 compute-0 sshd-session[38378]: Disconnecting invalid user test2 82.67.140.251 port 56648: Too many authentication failures [preauth]
Dec 11 09:06:33 compute-0 sudo[38654]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oquribwxjnycctfurrszaqcotefgykuw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765443992.6822698-1068-57535311606432/AnsiballZ_systemd.py'
Dec 11 09:06:33 compute-0 sudo[38654]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 11 09:06:33 compute-0 sshd-session[38579]: Invalid user test2 from 82.67.140.251 port 56790
Dec 11 09:06:33 compute-0 sshd-session[38579]: Received disconnect from 82.67.140.251 port 56790:11: disconnected by user [preauth]
Dec 11 09:06:33 compute-0 sshd-session[38579]: Disconnected from invalid user test2 82.67.140.251 port 56790 [preauth]
Dec 11 09:06:33 compute-0 python3.9[38656]: ansible-ansible.builtin.systemd Invoked with name=systemd-modules-load.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Dec 11 09:06:33 compute-0 systemd[1]: Starting Load Kernel Modules...
Dec 11 09:06:33 compute-0 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Dec 11 09:06:33 compute-0 kernel: Bridge firewalling registered
Dec 11 09:06:33 compute-0 systemd-modules-load[38660]: Inserted module 'br_netfilter'
Dec 11 09:06:33 compute-0 systemd[1]: Finished Load Kernel Modules.
Dec 11 09:06:33 compute-0 sudo[38654]: pam_unix(sudo:session): session closed for user root
Dec 11 09:06:34 compute-0 sshd-session[38662]: Invalid user ubuntu from 82.67.140.251 port 56867
Dec 11 09:06:34 compute-0 sudo[38816]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zmeavlgywmeuqlyfcwgnvrvwwntdtwfg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765443993.9026058-1092-111090828996153/AnsiballZ_stat.py'
Dec 11 09:06:34 compute-0 sudo[38816]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 11 09:06:34 compute-0 python3.9[38818]: ansible-ansible.legacy.stat Invoked with path=/etc/sysctl.d/99-edpm.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 11 09:06:34 compute-0 sudo[38816]: pam_unix(sudo:session): session closed for user root
Dec 11 09:06:34 compute-0 sshd-session[38662]: error: maximum authentication attempts exceeded for invalid user ubuntu from 82.67.140.251 port 56867 ssh2 [preauth]
Dec 11 09:06:34 compute-0 sshd-session[38662]: Disconnecting invalid user ubuntu 82.67.140.251 port 56867: Too many authentication failures [preauth]
Dec 11 09:06:34 compute-0 sudo[38939]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qwaqrmkkvaqsxhkpxtkjortvjyntvtnz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765443993.9026058-1092-111090828996153/AnsiballZ_copy.py'
Dec 11 09:06:34 compute-0 sudo[38939]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 11 09:06:34 compute-0 python3.9[38941]: ansible-ansible.legacy.copy Invoked with dest=/etc/sysctl.d/99-edpm.conf group=root mode=0644 owner=root setype=etc_t src=/home/zuul/.ansible/tmp/ansible-tmp-1765443993.9026058-1092-111090828996153/.source.conf follow=False _original_basename=edpm-sysctl.conf.j2 checksum=2a366439721b855adcfe4d7f152babb68596a007 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Dec 11 09:06:34 compute-0 sudo[38939]: pam_unix(sudo:session): session closed for user root
Dec 11 09:06:35 compute-0 sshd-session[38942]: Invalid user ubuntu from 82.67.140.251 port 57010
Dec 11 09:06:35 compute-0 sudo[39093]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-eijggpheencluqnmsqtbewawnkbftbzz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765443995.3655362-1146-72249747703323/AnsiballZ_dnf.py'
Dec 11 09:06:35 compute-0 sudo[39093]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 11 09:06:35 compute-0 sshd-session[38942]: error: maximum authentication attempts exceeded for invalid user ubuntu from 82.67.140.251 port 57010 ssh2 [preauth]
Dec 11 09:06:35 compute-0 sshd-session[38942]: Disconnecting invalid user ubuntu 82.67.140.251 port 57010: Too many authentication failures [preauth]
Dec 11 09:06:35 compute-0 python3.9[39095]: ansible-ansible.legacy.dnf Invoked with name=['tuned', 'tuned-profiles-cpu-partitioning'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Dec 11 09:06:36 compute-0 sshd-session[39097]: Invalid user ubuntu from 82.67.140.251 port 57134
Dec 11 09:06:36 compute-0 sshd-session[39097]: Received disconnect from 82.67.140.251 port 57134:11: disconnected by user [preauth]
Dec 11 09:06:36 compute-0 sshd-session[39097]: Disconnected from invalid user ubuntu 82.67.140.251 port 57134 [preauth]
Dec 11 09:06:37 compute-0 sshd-session[39099]: Invalid user pi from 82.67.140.251 port 57233
Dec 11 09:06:37 compute-0 sshd-session[39099]: Received disconnect from 82.67.140.251 port 57233:11: disconnected by user [preauth]
Dec 11 09:06:37 compute-0 sshd-session[39099]: Disconnected from invalid user pi 82.67.140.251 port 57233 [preauth]
Dec 11 09:06:38 compute-0 sshd-session[39103]: Invalid user baikal from 82.67.140.251 port 57340
Dec 11 09:06:38 compute-0 sshd-session[39103]: Received disconnect from 82.67.140.251 port 57340:11: disconnected by user [preauth]
Dec 11 09:06:38 compute-0 sshd-session[39103]: Disconnected from invalid user baikal 82.67.140.251 port 57340 [preauth]
Dec 11 09:06:40 compute-0 dbus-broker-launch[746]: Noticed file-system modification, trigger reload.
Dec 11 09:06:40 compute-0 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Dec 11 09:06:40 compute-0 systemd[1]: Starting man-db-cache-update.service...
Dec 11 09:06:40 compute-0 systemd[1]: Reloading.
Dec 11 09:06:40 compute-0 systemd-rc-local-generator[39160]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 11 09:06:40 compute-0 systemd[1]: Queuing reload/restart jobs for marked units…
Dec 11 09:06:41 compute-0 sudo[39093]: pam_unix(sudo:session): session closed for user root
Dec 11 09:06:42 compute-0 python3.9[40621]: ansible-ansible.builtin.stat Invoked with path=/etc/tuned/active_profile follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 11 09:06:43 compute-0 python3.9[41533]: ansible-ansible.builtin.slurp Invoked with src=/etc/tuned/active_profile
Dec 11 09:06:43 compute-0 python3.9[42319]: ansible-ansible.builtin.stat Invoked with path=/etc/tuned/throughput-performance-variables.conf follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 11 09:06:44 compute-0 sudo[43214]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hkimjsrxpdzqvcvfckrupddqimgbgyfy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765444004.1986856-1263-25262693650120/AnsiballZ_command.py'
Dec 11 09:06:44 compute-0 sudo[43214]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 11 09:06:44 compute-0 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Dec 11 09:06:44 compute-0 systemd[1]: Finished man-db-cache-update.service.
Dec 11 09:06:44 compute-0 systemd[1]: man-db-cache-update.service: Consumed 5.009s CPU time.
Dec 11 09:06:44 compute-0 systemd[1]: run-raca7fdc344f841baa32a2a2fbf444f05.service: Deactivated successfully.
Dec 11 09:06:44 compute-0 python3.9[43236]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/tuned-adm profile throughput-performance _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 11 09:06:44 compute-0 systemd[1]: Starting Dynamic System Tuning Daemon...
Dec 11 09:06:45 compute-0 systemd[1]: Starting Authorization Manager...
Dec 11 09:06:45 compute-0 systemd[1]: Started Dynamic System Tuning Daemon.
Dec 11 09:06:45 compute-0 polkitd[43489]: Started polkitd version 0.117
Dec 11 09:06:45 compute-0 polkitd[43489]: Loading rules from directory /etc/polkit-1/rules.d
Dec 11 09:06:45 compute-0 polkitd[43489]: Loading rules from directory /usr/share/polkit-1/rules.d
Dec 11 09:06:45 compute-0 polkitd[43489]: Finished loading, compiling and executing 2 rules
Dec 11 09:06:45 compute-0 systemd[1]: Started Authorization Manager.
Dec 11 09:06:45 compute-0 polkitd[43489]: Acquired the name org.freedesktop.PolicyKit1 on the system bus
Dec 11 09:06:45 compute-0 sudo[43214]: pam_unix(sudo:session): session closed for user root
Dec 11 09:06:45 compute-0 sudo[43657]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dslruybfneutyrygvhzsurijphdyaiua ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765444005.6209517-1290-148452833270382/AnsiballZ_systemd.py'
Dec 11 09:06:45 compute-0 sudo[43657]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 11 09:06:46 compute-0 python3.9[43659]: ansible-ansible.builtin.systemd Invoked with enabled=True name=tuned state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 11 09:06:46 compute-0 systemd[1]: Stopping Dynamic System Tuning Daemon...
Dec 11 09:06:46 compute-0 systemd[1]: tuned.service: Deactivated successfully.
Dec 11 09:06:46 compute-0 systemd[1]: Stopped Dynamic System Tuning Daemon.
Dec 11 09:06:46 compute-0 systemd[1]: Starting Dynamic System Tuning Daemon...
Dec 11 09:06:46 compute-0 systemd[1]: Started Dynamic System Tuning Daemon.
Dec 11 09:06:46 compute-0 sudo[43657]: pam_unix(sudo:session): session closed for user root
Dec 11 09:06:47 compute-0 python3.9[43820]: ansible-ansible.builtin.slurp Invoked with src=/proc/cmdline
Dec 11 09:06:50 compute-0 sudo[43970]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ksrbfvajzdcyxlevambgzuagyyjkeeic ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765444010.538002-1461-14749584838387/AnsiballZ_systemd.py'
Dec 11 09:06:50 compute-0 sudo[43970]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 11 09:06:51 compute-0 python3.9[43972]: ansible-ansible.builtin.systemd Invoked with enabled=False name=ksm.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 11 09:06:51 compute-0 systemd[1]: Reloading.
Dec 11 09:06:51 compute-0 systemd-rc-local-generator[44001]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 11 09:06:51 compute-0 sudo[43970]: pam_unix(sudo:session): session closed for user root
Dec 11 09:06:51 compute-0 sudo[44158]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xjzarhfteiyxudfrbietquwwoakhbfqz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765444011.5526986-1461-45927158027/AnsiballZ_systemd.py'
Dec 11 09:06:51 compute-0 sudo[44158]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 11 09:06:52 compute-0 python3.9[44160]: ansible-ansible.builtin.systemd Invoked with enabled=False name=ksmtuned.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 11 09:06:52 compute-0 systemd[1]: Reloading.
Dec 11 09:06:52 compute-0 systemd-rc-local-generator[44189]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 11 09:06:52 compute-0 sudo[44158]: pam_unix(sudo:session): session closed for user root
Dec 11 09:06:53 compute-0 sudo[44347]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zqqluizvzbhexrjrsiddwzoojafkiizr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765444012.7689059-1509-36491298345234/AnsiballZ_command.py'
Dec 11 09:06:53 compute-0 sudo[44347]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 11 09:06:53 compute-0 python3.9[44349]: ansible-ansible.legacy.command Invoked with _raw_params=mkswap "/swap" _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 11 09:06:53 compute-0 sudo[44347]: pam_unix(sudo:session): session closed for user root
Dec 11 09:06:54 compute-0 sudo[44500]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-knihoifkeklzgbzzjdruvhjzomlfelew ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765444013.7997222-1533-103246171206568/AnsiballZ_command.py'
Dec 11 09:06:54 compute-0 sudo[44500]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 11 09:06:54 compute-0 python3.9[44502]: ansible-ansible.legacy.command Invoked with _raw_params=swapon "/swap" _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 11 09:06:54 compute-0 kernel: Adding 1048572k swap on /swap.  Priority:-2 extents:1 across:1048572k 
Dec 11 09:06:54 compute-0 sudo[44500]: pam_unix(sudo:session): session closed for user root
Dec 11 09:06:54 compute-0 sudo[44653]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ttoovxtogdzurbuhncxodiifxqnsybko ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765444014.4913065-1557-10878858110399/AnsiballZ_command.py'
Dec 11 09:06:54 compute-0 sudo[44653]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 11 09:06:54 compute-0 python3.9[44655]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/bin/update-ca-trust _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 11 09:06:56 compute-0 sudo[44653]: pam_unix(sudo:session): session closed for user root
Dec 11 09:06:57 compute-0 sudo[44815]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gjfhztdhxvzjutpvsexilwspgdsbizhf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765444016.883704-1581-251503090959102/AnsiballZ_command.py'
Dec 11 09:06:57 compute-0 sudo[44815]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 11 09:06:57 compute-0 python3.9[44817]: ansible-ansible.legacy.command Invoked with _raw_params=echo 2 >/sys/kernel/mm/ksm/run _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 11 09:06:57 compute-0 sudo[44815]: pam_unix(sudo:session): session closed for user root
Dec 11 09:06:57 compute-0 sudo[44968]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xqkwcyayylrverxavgahbqjmrzrhemde ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765444017.5534642-1605-247434721447732/AnsiballZ_systemd.py'
Dec 11 09:06:57 compute-0 sudo[44968]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 11 09:06:58 compute-0 python3.9[44970]: ansible-ansible.builtin.systemd Invoked with name=systemd-sysctl.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Dec 11 09:06:58 compute-0 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Dec 11 09:06:58 compute-0 systemd[1]: Stopped Apply Kernel Variables.
Dec 11 09:06:58 compute-0 systemd[1]: Stopping Apply Kernel Variables...
Dec 11 09:06:58 compute-0 systemd[1]: Starting Apply Kernel Variables...
Dec 11 09:06:58 compute-0 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Dec 11 09:06:58 compute-0 systemd[1]: Finished Apply Kernel Variables.
Dec 11 09:06:58 compute-0 sudo[44968]: pam_unix(sudo:session): session closed for user root
Dec 11 09:06:58 compute-0 sshd-session[31266]: Connection closed by 192.168.122.30 port 50128
Dec 11 09:06:58 compute-0 sshd-session[31263]: pam_unix(sshd:session): session closed for user zuul
Dec 11 09:06:58 compute-0 systemd-logind[792]: Session 9 logged out. Waiting for processes to exit.
Dec 11 09:06:58 compute-0 systemd[1]: session-9.scope: Deactivated successfully.
Dec 11 09:06:58 compute-0 systemd[1]: session-9.scope: Consumed 2min 41.298s CPU time.
Dec 11 09:06:58 compute-0 systemd-logind[792]: Removed session 9.
Dec 11 09:07:04 compute-0 sshd-session[45000]: Accepted publickey for zuul from 192.168.122.30 port 40244 ssh2: ECDSA SHA256:cT48FffNzE3FSGRebchaTzw3hIqcCIoBfXY30Q2C9bc
Dec 11 09:07:04 compute-0 systemd-logind[792]: New session 10 of user zuul.
Dec 11 09:07:04 compute-0 systemd[1]: Started Session 10 of User zuul.
Dec 11 09:07:04 compute-0 sshd-session[45000]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Dec 11 09:07:05 compute-0 python3.9[45153]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 11 09:07:06 compute-0 sudo[45307]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lyqpxrfkeklpwgwosavcvqnhsljfdyez ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765444026.4029894-68-61125826048651/AnsiballZ_getent.py'
Dec 11 09:07:06 compute-0 sudo[45307]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 11 09:07:07 compute-0 python3.9[45309]: ansible-ansible.builtin.getent Invoked with database=passwd key=openvswitch fail_key=True service=None split=None
Dec 11 09:07:07 compute-0 sudo[45307]: pam_unix(sudo:session): session closed for user root
Dec 11 09:07:07 compute-0 sudo[45460]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wogyowagwyfhbsewdiqsyxusxvhqfiki ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765444027.28518-92-213805184152087/AnsiballZ_group.py'
Dec 11 09:07:07 compute-0 sudo[45460]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 11 09:07:07 compute-0 python3.9[45462]: ansible-ansible.builtin.group Invoked with gid=42476 name=openvswitch state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Dec 11 09:07:07 compute-0 groupadd[45463]: group added to /etc/group: name=openvswitch, GID=42476
Dec 11 09:07:07 compute-0 groupadd[45463]: group added to /etc/gshadow: name=openvswitch
Dec 11 09:07:07 compute-0 groupadd[45463]: new group: name=openvswitch, GID=42476
Dec 11 09:07:07 compute-0 sudo[45460]: pam_unix(sudo:session): session closed for user root
Dec 11 09:07:08 compute-0 sudo[45618]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ktdrifqoysgwiawnrolgzfymscxgyhqh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765444028.27339-116-38746714707481/AnsiballZ_user.py'
Dec 11 09:07:08 compute-0 sudo[45618]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 11 09:07:08 compute-0 python3.9[45620]: ansible-ansible.builtin.user Invoked with comment=openvswitch user group=openvswitch groups=['hugetlbfs'] name=openvswitch shell=/sbin/nologin state=present uid=42476 non_unique=False force=False remove=False create_home=True system=False move_home=False append=False ssh_key_bits=0 ssh_key_type=rsa ssh_key_comment=ansible-generated on compute-0 update_password=always home=None password=NOT_LOGGING_PARAMETER login_class=None password_expire_max=None password_expire_min=None password_expire_warn=None hidden=None seuser=None skeleton=None generate_ssh_key=None ssh_key_file=None ssh_key_passphrase=NOT_LOGGING_PARAMETER expires=None password_lock=None local=None profile=None authorization=None role=None umask=None password_expire_account_disable=None uid_min=None uid_max=None
Dec 11 09:07:08 compute-0 useradd[45622]: new user: name=openvswitch, UID=42476, GID=42476, home=/home/openvswitch, shell=/sbin/nologin, from=/dev/pts/0
Dec 11 09:07:08 compute-0 useradd[45622]: add 'openvswitch' to group 'hugetlbfs'
Dec 11 09:07:08 compute-0 useradd[45622]: add 'openvswitch' to shadow group 'hugetlbfs'
Dec 11 09:07:09 compute-0 sudo[45618]: pam_unix(sudo:session): session closed for user root
Dec 11 09:07:09 compute-0 sudo[45778]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rflrqecxrqcfsrqehzzswoupmkmwsqej ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765444029.4605346-146-130323866413563/AnsiballZ_setup.py'
Dec 11 09:07:09 compute-0 sudo[45778]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 11 09:07:10 compute-0 python3.9[45780]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Dec 11 09:07:10 compute-0 sudo[45778]: pam_unix(sudo:session): session closed for user root
Dec 11 09:07:10 compute-0 sudo[45862]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-limkkmqbxhxcvcihiwkgmfllfstwywbp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765444029.4605346-146-130323866413563/AnsiballZ_dnf.py'
Dec 11 09:07:10 compute-0 sudo[45862]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 11 09:07:11 compute-0 python3.9[45864]: ansible-ansible.legacy.dnf Invoked with download_only=True name=['openvswitch'] allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None state=None
Dec 11 09:07:14 compute-0 sudo[45862]: pam_unix(sudo:session): session closed for user root
Dec 11 09:07:14 compute-0 sudo[46029]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-odyayrlzlmsrigitjzylcpontzfjxqty ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765444034.5641901-188-102409779030342/AnsiballZ_dnf.py'
Dec 11 09:07:14 compute-0 sudo[46029]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 11 09:07:15 compute-0 python3.9[46031]: ansible-ansible.legacy.dnf Invoked with name=['openvswitch'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Dec 11 09:07:29 compute-0 kernel: SELinux:  Converting 2731 SID table entries...
Dec 11 09:07:29 compute-0 kernel: SELinux:  policy capability network_peer_controls=1
Dec 11 09:07:29 compute-0 kernel: SELinux:  policy capability open_perms=1
Dec 11 09:07:29 compute-0 kernel: SELinux:  policy capability extended_socket_class=1
Dec 11 09:07:29 compute-0 kernel: SELinux:  policy capability always_check_network=0
Dec 11 09:07:29 compute-0 kernel: SELinux:  policy capability cgroup_seclabel=1
Dec 11 09:07:29 compute-0 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Dec 11 09:07:29 compute-0 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Dec 11 09:07:29 compute-0 groupadd[46054]: group added to /etc/group: name=unbound, GID=993
Dec 11 09:07:29 compute-0 groupadd[46054]: group added to /etc/gshadow: name=unbound
Dec 11 09:07:29 compute-0 groupadd[46054]: new group: name=unbound, GID=993
Dec 11 09:07:29 compute-0 useradd[46061]: new user: name=unbound, UID=993, GID=993, home=/var/lib/unbound, shell=/sbin/nologin, from=none
Dec 11 09:07:29 compute-0 dbus-broker-launch[780]: avc:  op=load_policy lsm=selinux seqno=9 res=1
Dec 11 09:07:29 compute-0 systemd[1]: Started daily update of the root trust anchor for DNSSEC.
Dec 11 09:07:31 compute-0 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Dec 11 09:07:31 compute-0 systemd[1]: Starting man-db-cache-update.service...
Dec 11 09:07:31 compute-0 systemd[1]: Reloading.
Dec 11 09:07:31 compute-0 systemd-sysv-generator[46561]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 11 09:07:31 compute-0 systemd-rc-local-generator[46558]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 11 09:07:31 compute-0 systemd[1]: Queuing reload/restart jobs for marked units…
Dec 11 09:07:32 compute-0 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Dec 11 09:07:32 compute-0 systemd[1]: Finished man-db-cache-update.service.
Dec 11 09:07:32 compute-0 systemd[1]: run-r4e423aad3cd948159f8e55474db224a8.service: Deactivated successfully.
Dec 11 09:07:32 compute-0 sudo[46029]: pam_unix(sudo:session): session closed for user root
Dec 11 09:07:33 compute-0 sudo[47126]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fwmnympvoilcsutyoipwroaskenxsisr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765444053.1925013-212-168913516061072/AnsiballZ_systemd.py'
Dec 11 09:07:33 compute-0 sudo[47126]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 11 09:07:34 compute-0 python3.9[47128]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=openvswitch.service state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Dec 11 09:07:34 compute-0 systemd[1]: Reloading.
Dec 11 09:07:34 compute-0 systemd-rc-local-generator[47155]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 11 09:07:34 compute-0 systemd-sysv-generator[47162]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 11 09:07:34 compute-0 systemd[1]: Starting Open vSwitch Database Unit...
Dec 11 09:07:34 compute-0 chown[47170]: /usr/bin/chown: cannot access '/run/openvswitch': No such file or directory
Dec 11 09:07:34 compute-0 ovs-ctl[47175]: /etc/openvswitch/conf.db does not exist ... (warning).
Dec 11 09:07:34 compute-0 ovs-ctl[47175]: Creating empty database /etc/openvswitch/conf.db [  OK  ]
Dec 11 09:07:34 compute-0 ovs-ctl[47175]: Starting ovsdb-server [  OK  ]
Dec 11 09:07:34 compute-0 ovs-vsctl[47224]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --no-wait -- init -- set Open_vSwitch . db-version=8.5.1
Dec 11 09:07:34 compute-0 ovs-vsctl[47244]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --no-wait set Open_vSwitch . ovs-version=3.3.5-115.el9s "external-ids:system-id=\"ec9a39e7-d4e2-4b13-b3d3-7357fc123997\"" "external-ids:rundir=\"/var/run/openvswitch\"" "system-type=\"centos\"" "system-version=\"9\""
Dec 11 09:07:34 compute-0 ovs-ctl[47175]: Configuring Open vSwitch system IDs [  OK  ]
Dec 11 09:07:34 compute-0 ovs-vsctl[47250]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --no-wait add Open_vSwitch . external-ids hostname=compute-0
Dec 11 09:07:34 compute-0 ovs-ctl[47175]: Enabling remote OVSDB managers [  OK  ]
Dec 11 09:07:34 compute-0 systemd[1]: Started Open vSwitch Database Unit.
Dec 11 09:07:34 compute-0 systemd[1]: Starting Open vSwitch Delete Transient Ports...
Dec 11 09:07:34 compute-0 systemd[1]: Finished Open vSwitch Delete Transient Ports.
Dec 11 09:07:34 compute-0 systemd[1]: Starting Open vSwitch Forwarding Unit...
Dec 11 09:07:35 compute-0 kernel: openvswitch: Open vSwitch switching datapath
Dec 11 09:07:35 compute-0 ovs-ctl[47295]: Inserting openvswitch module [  OK  ]
Dec 11 09:07:35 compute-0 ovs-ctl[47264]: Starting ovs-vswitchd [  OK  ]
Dec 11 09:07:35 compute-0 ovs-vsctl[47312]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --no-wait add Open_vSwitch . external-ids hostname=compute-0
Dec 11 09:07:35 compute-0 ovs-ctl[47264]: Enabling remote OVSDB managers [  OK  ]
Dec 11 09:07:35 compute-0 systemd[1]: Started Open vSwitch Forwarding Unit.
Dec 11 09:07:35 compute-0 systemd[1]: Starting Open vSwitch...
Dec 11 09:07:35 compute-0 systemd[1]: Finished Open vSwitch.
Dec 11 09:07:35 compute-0 sudo[47126]: pam_unix(sudo:session): session closed for user root
Dec 11 09:07:36 compute-0 python3.9[47464]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'selinux'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 11 09:07:37 compute-0 sudo[47614]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pknvraepchqrokqywdhxcshqwedrwyni ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765444056.5845876-266-249812673622062/AnsiballZ_sefcontext.py'
Dec 11 09:07:37 compute-0 sudo[47614]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 11 09:07:37 compute-0 python3.9[47616]: ansible-community.general.sefcontext Invoked with selevel=s0 setype=container_file_t state=present target=/var/lib/edpm-config(/.*)? ignore_selinux_state=False ftype=a reload=True substitute=None seuser=None
Dec 11 09:07:38 compute-0 kernel: SELinux:  Converting 2745 SID table entries...
Dec 11 09:07:38 compute-0 kernel: SELinux:  policy capability network_peer_controls=1
Dec 11 09:07:38 compute-0 kernel: SELinux:  policy capability open_perms=1
Dec 11 09:07:38 compute-0 kernel: SELinux:  policy capability extended_socket_class=1
Dec 11 09:07:38 compute-0 kernel: SELinux:  policy capability always_check_network=0
Dec 11 09:07:38 compute-0 kernel: SELinux:  policy capability cgroup_seclabel=1
Dec 11 09:07:38 compute-0 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Dec 11 09:07:38 compute-0 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Dec 11 09:07:38 compute-0 sudo[47614]: pam_unix(sudo:session): session closed for user root
Dec 11 09:07:39 compute-0 python3.9[47771]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local', 'distribution'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 11 09:07:40 compute-0 sudo[47927]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-murpdxfbqdbcospjrvokeyzjezjlxwrl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765444060.1693459-320-273961655340846/AnsiballZ_dnf.py'
Dec 11 09:07:40 compute-0 dbus-broker-launch[780]: avc:  op=load_policy lsm=selinux seqno=10 res=1
Dec 11 09:07:40 compute-0 sudo[47927]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 11 09:07:40 compute-0 python3.9[47929]: ansible-ansible.legacy.dnf Invoked with name=['driverctl', 'lvm2', 'crudini', 'jq', 'nftables', 'NetworkManager', 'openstack-selinux', 'python3-libselinux', 'python3-pyyaml', 'rsync', 'tmpwatch', 'sysstat', 'iproute-tc', 'ksmtuned', 'systemd-container', 'crypto-policies-scripts', 'grubby', 'sos'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Dec 11 09:07:42 compute-0 sudo[47927]: pam_unix(sudo:session): session closed for user root
Dec 11 09:07:42 compute-0 sudo[48080]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bjuikwjzwhtynoigagtglerycfxipyfr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765444062.4294863-344-86999709252455/AnsiballZ_command.py'
Dec 11 09:07:42 compute-0 sudo[48080]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 11 09:07:43 compute-0 python3.9[48082]: ansible-ansible.legacy.command Invoked with _raw_params=rpm -V driverctl lvm2 crudini jq nftables NetworkManager openstack-selinux python3-libselinux python3-pyyaml rsync tmpwatch sysstat iproute-tc ksmtuned systemd-container crypto-policies-scripts grubby sos _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 11 09:07:43 compute-0 sudo[48080]: pam_unix(sudo:session): session closed for user root
Dec 11 09:07:44 compute-0 sudo[48367]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ojebyhpiwkkuzefkotifyvtktlcbzevm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765444063.9692736-368-32057383798994/AnsiballZ_file.py'
Dec 11 09:07:44 compute-0 sudo[48367]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 11 09:07:44 compute-0 python3.9[48369]: ansible-ansible.builtin.file Invoked with mode=0750 path=/var/lib/edpm-config selevel=s0 setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None attributes=None
Dec 11 09:07:44 compute-0 sudo[48367]: pam_unix(sudo:session): session closed for user root
Dec 11 09:07:45 compute-0 python3.9[48519]: ansible-ansible.builtin.stat Invoked with path=/etc/cloud/cloud.cfg.d follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 11 09:07:45 compute-0 sudo[48671]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sjuoogluktnzcimxskztcyewjwnuspxl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765444065.5367153-416-224953585204293/AnsiballZ_dnf.py'
Dec 11 09:07:45 compute-0 sudo[48671]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 11 09:07:46 compute-0 python3.9[48673]: ansible-ansible.legacy.dnf Invoked with name=['NetworkManager-ovs'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Dec 11 09:07:48 compute-0 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Dec 11 09:07:48 compute-0 systemd[1]: Starting man-db-cache-update.service...
Dec 11 09:07:48 compute-0 systemd[1]: Reloading.
Dec 11 09:07:48 compute-0 systemd-sysv-generator[48715]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 11 09:07:48 compute-0 systemd-rc-local-generator[48712]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 11 09:07:48 compute-0 systemd[1]: Queuing reload/restart jobs for marked units…
Dec 11 09:07:48 compute-0 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Dec 11 09:07:48 compute-0 systemd[1]: Finished man-db-cache-update.service.
Dec 11 09:07:48 compute-0 systemd[1]: run-r19799d9866104222b9e1f3fd250bad88.service: Deactivated successfully.
Dec 11 09:07:48 compute-0 sudo[48671]: pam_unix(sudo:session): session closed for user root
Dec 11 09:07:50 compute-0 sudo[48988]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-akaxxqsinzigwhhlgedpakgscbiwqokg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765444070.0432699-440-212499855584157/AnsiballZ_systemd.py'
Dec 11 09:07:50 compute-0 sudo[48988]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 11 09:07:50 compute-0 python3.9[48990]: ansible-ansible.builtin.systemd Invoked with name=NetworkManager state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Dec 11 09:07:50 compute-0 systemd[1]: NetworkManager-wait-online.service: Deactivated successfully.
Dec 11 09:07:50 compute-0 systemd[1]: Stopped Network Manager Wait Online.
Dec 11 09:07:50 compute-0 systemd[1]: Stopping Network Manager Wait Online...
Dec 11 09:07:50 compute-0 systemd[1]: Stopping Network Manager...
Dec 11 09:07:50 compute-0 NetworkManager[7188]: <info>  [1765444070.6761] caught SIGTERM, shutting down normally.
Dec 11 09:07:50 compute-0 NetworkManager[7188]: <info>  [1765444070.6785] dhcp4 (eth0): canceled DHCP transaction
Dec 11 09:07:50 compute-0 NetworkManager[7188]: <info>  [1765444070.6786] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
Dec 11 09:07:50 compute-0 NetworkManager[7188]: <info>  [1765444070.6786] dhcp4 (eth0): state changed no lease
Dec 11 09:07:50 compute-0 NetworkManager[7188]: <info>  [1765444070.6789] manager: NetworkManager state is now CONNECTED_SITE
Dec 11 09:07:50 compute-0 NetworkManager[7188]: <info>  [1765444070.6865] exiting (success)
Dec 11 09:07:50 compute-0 systemd[1]: Starting Network Manager Script Dispatcher Service...
Dec 11 09:07:50 compute-0 systemd[1]: Started Network Manager Script Dispatcher Service.
Dec 11 09:07:50 compute-0 systemd[1]: NetworkManager.service: Deactivated successfully.
Dec 11 09:07:50 compute-0 systemd[1]: Stopped Network Manager.
Dec 11 09:07:50 compute-0 systemd[1]: NetworkManager.service: Consumed 16.452s CPU time, 4.3M memory peak, read 0B from disk, written 27.5K to disk.
Dec 11 09:07:50 compute-0 systemd[1]: Starting Network Manager...
Dec 11 09:07:50 compute-0 NetworkManager[49003]: <info>  [1765444070.7590] NetworkManager (version 1.54.2-1.el9) is starting... (after a restart, boot:13dd1f60-0a56-492c-a25c-280d72789ed1)
Dec 11 09:07:50 compute-0 NetworkManager[49003]: <info>  [1765444070.7591] Read config: /etc/NetworkManager/NetworkManager.conf, /run/NetworkManager/conf.d/15-carrier-timeout.conf
Dec 11 09:07:50 compute-0 NetworkManager[49003]: <info>  [1765444070.7657] manager[0x5585810b2000]: monitoring kernel firmware directory '/lib/firmware'.
Dec 11 09:07:50 compute-0 systemd[1]: Starting Hostname Service...
Dec 11 09:07:50 compute-0 systemd[1]: Started Hostname Service.
Dec 11 09:07:50 compute-0 NetworkManager[49003]: <info>  [1765444070.8432] hostname: hostname: using hostnamed
Dec 11 09:07:50 compute-0 NetworkManager[49003]: <info>  [1765444070.8434] hostname: static hostname changed from (none) to "compute-0"
Dec 11 09:07:50 compute-0 NetworkManager[49003]: <info>  [1765444070.8441] dns-mgr: init: dns=default,systemd-resolved rc-manager=symlink (auto)
Dec 11 09:07:50 compute-0 NetworkManager[49003]: <info>  [1765444070.8448] manager[0x5585810b2000]: rfkill: Wi-Fi hardware radio set enabled
Dec 11 09:07:50 compute-0 NetworkManager[49003]: <info>  [1765444070.8449] manager[0x5585810b2000]: rfkill: WWAN hardware radio set enabled
Dec 11 09:07:50 compute-0 NetworkManager[49003]: <info>  [1765444070.8471] Loaded device plugin: NMOvsFactory (/usr/lib64/NetworkManager/1.54.2-1.el9/libnm-device-plugin-ovs.so)
Dec 11 09:07:50 compute-0 NetworkManager[49003]: <info>  [1765444070.8482] Loaded device plugin: NMTeamFactory (/usr/lib64/NetworkManager/1.54.2-1.el9/libnm-device-plugin-team.so)
Dec 11 09:07:50 compute-0 NetworkManager[49003]: <info>  [1765444070.8483] manager: rfkill: Wi-Fi enabled by radio killswitch; enabled by state file
Dec 11 09:07:50 compute-0 NetworkManager[49003]: <info>  [1765444070.8484] manager: rfkill: WWAN enabled by radio killswitch; enabled by state file
Dec 11 09:07:50 compute-0 NetworkManager[49003]: <info>  [1765444070.8484] manager: Networking is enabled by state file
Dec 11 09:07:50 compute-0 NetworkManager[49003]: <info>  [1765444070.8487] settings: Loaded settings plugin: keyfile (internal)
Dec 11 09:07:50 compute-0 NetworkManager[49003]: <info>  [1765444070.8492] settings: Loaded settings plugin: ifcfg-rh ("/usr/lib64/NetworkManager/1.54.2-1.el9/libnm-settings-plugin-ifcfg-rh.so")
Dec 11 09:07:50 compute-0 NetworkManager[49003]: <info>  [1765444070.8528] Warning: the ifcfg-rh plugin is deprecated, please migrate connections to the keyfile format using "nmcli connection migrate"
Dec 11 09:07:50 compute-0 NetworkManager[49003]: <info>  [1765444070.8539] dhcp: init: Using DHCP client 'internal'
Dec 11 09:07:50 compute-0 NetworkManager[49003]: <info>  [1765444070.8542] manager: (lo): new Loopback device (/org/freedesktop/NetworkManager/Devices/1)
Dec 11 09:07:50 compute-0 NetworkManager[49003]: <info>  [1765444070.8548] device (lo): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Dec 11 09:07:50 compute-0 NetworkManager[49003]: <info>  [1765444070.8555] device (lo): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'external')
Dec 11 09:07:50 compute-0 NetworkManager[49003]: <info>  [1765444070.8563] device (lo): Activation: starting connection 'lo' (cc03eaf0-9208-4de5-bc26-72936417c77f)
Dec 11 09:07:50 compute-0 NetworkManager[49003]: <info>  [1765444070.8570] device (eth0): carrier: link connected
Dec 11 09:07:50 compute-0 NetworkManager[49003]: <info>  [1765444070.8573] manager: (eth0): new Ethernet device (/org/freedesktop/NetworkManager/Devices/2)
Dec 11 09:07:50 compute-0 NetworkManager[49003]: <info>  [1765444070.8579] manager: (eth0): assume: will attempt to assume matching connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03) (indicated)
Dec 11 09:07:50 compute-0 NetworkManager[49003]: <info>  [1765444070.8579] device (eth0): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'assume')
Dec 11 09:07:50 compute-0 NetworkManager[49003]: <info>  [1765444070.8590] device (eth0): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'assume')
Dec 11 09:07:50 compute-0 NetworkManager[49003]: <info>  [1765444070.8598] device (eth0): Activation: starting connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03)
Dec 11 09:07:50 compute-0 NetworkManager[49003]: <info>  [1765444070.8605] device (eth1): carrier: link connected
Dec 11 09:07:50 compute-0 NetworkManager[49003]: <info>  [1765444070.8609] manager: (eth1): new Ethernet device (/org/freedesktop/NetworkManager/Devices/3)
Dec 11 09:07:50 compute-0 NetworkManager[49003]: <info>  [1765444070.8616] manager: (eth1): assume: will attempt to assume matching connection 'ci-private-network' (ba5fe1b2-21e2-562b-838b-8e9d781a1880) (indicated)
Dec 11 09:07:50 compute-0 NetworkManager[49003]: <info>  [1765444070.8617] device (eth1): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'assume')
Dec 11 09:07:50 compute-0 NetworkManager[49003]: <info>  [1765444070.8623] device (eth1): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'assume')
Dec 11 09:07:50 compute-0 NetworkManager[49003]: <info>  [1765444070.8631] device (eth1): Activation: starting connection 'ci-private-network' (ba5fe1b2-21e2-562b-838b-8e9d781a1880)
Dec 11 09:07:50 compute-0 systemd[1]: Started Network Manager.
Dec 11 09:07:50 compute-0 NetworkManager[49003]: <info>  [1765444070.8637] bus-manager: acquired D-Bus service "org.freedesktop.NetworkManager"
Dec 11 09:07:50 compute-0 NetworkManager[49003]: <info>  [1765444070.8669] device (lo): state change: disconnected -> prepare (reason 'none', managed-type: 'external')
Dec 11 09:07:50 compute-0 NetworkManager[49003]: <info>  [1765444070.8673] device (lo): state change: prepare -> config (reason 'none', managed-type: 'external')
Dec 11 09:07:50 compute-0 NetworkManager[49003]: <info>  [1765444070.8678] device (lo): state change: config -> ip-config (reason 'none', managed-type: 'external')
Dec 11 09:07:50 compute-0 NetworkManager[49003]: <info>  [1765444070.8685] device (eth0): state change: disconnected -> prepare (reason 'none', managed-type: 'assume')
Dec 11 09:07:50 compute-0 NetworkManager[49003]: <info>  [1765444070.8689] device (eth0): state change: prepare -> config (reason 'none', managed-type: 'assume')
Dec 11 09:07:50 compute-0 NetworkManager[49003]: <info>  [1765444070.8693] device (eth1): state change: disconnected -> prepare (reason 'none', managed-type: 'assume')
Dec 11 09:07:50 compute-0 NetworkManager[49003]: <info>  [1765444070.8696] device (eth1): state change: prepare -> config (reason 'none', managed-type: 'assume')
Dec 11 09:07:50 compute-0 NetworkManager[49003]: <info>  [1765444070.8699] device (lo): state change: ip-config -> ip-check (reason 'none', managed-type: 'external')
Dec 11 09:07:50 compute-0 NetworkManager[49003]: <info>  [1765444070.8708] device (eth0): state change: config -> ip-config (reason 'none', managed-type: 'assume')
Dec 11 09:07:50 compute-0 NetworkManager[49003]: <info>  [1765444070.8712] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
Dec 11 09:07:50 compute-0 NetworkManager[49003]: <info>  [1765444070.8724] device (eth1): state change: config -> ip-config (reason 'none', managed-type: 'assume')
Dec 11 09:07:50 compute-0 NetworkManager[49003]: <info>  [1765444070.8741] device (eth1): state change: ip-config -> ip-check (reason 'none', managed-type: 'assume')
Dec 11 09:07:50 compute-0 NetworkManager[49003]: <info>  [1765444070.8754] device (lo): state change: ip-check -> secondaries (reason 'none', managed-type: 'external')
Dec 11 09:07:50 compute-0 NetworkManager[49003]: <info>  [1765444070.8757] dhcp4 (eth0): state changed new lease, address=38.102.83.223
Dec 11 09:07:50 compute-0 NetworkManager[49003]: <info>  [1765444070.8761] device (lo): state change: secondaries -> activated (reason 'none', managed-type: 'external')
Dec 11 09:07:50 compute-0 NetworkManager[49003]: <info>  [1765444070.8769] device (lo): Activation: successful, device activated.
Dec 11 09:07:50 compute-0 NetworkManager[49003]: <info>  [1765444070.8783] policy: set 'System eth0' (eth0) as default for IPv4 routing and DNS
Dec 11 09:07:50 compute-0 systemd[1]: Starting Network Manager Wait Online...
Dec 11 09:07:50 compute-0 sudo[48988]: pam_unix(sudo:session): session closed for user root
Dec 11 09:07:51 compute-0 NetworkManager[49003]: <info>  [1765444071.2130] device (eth1): state change: ip-check -> secondaries (reason 'none', managed-type: 'assume')
Dec 11 09:07:51 compute-0 NetworkManager[49003]: <info>  [1765444071.2147] device (eth0): state change: ip-config -> ip-check (reason 'none', managed-type: 'assume')
Dec 11 09:07:51 compute-0 NetworkManager[49003]: <info>  [1765444071.2153] device (eth1): state change: secondaries -> activated (reason 'none', managed-type: 'assume')
Dec 11 09:07:51 compute-0 NetworkManager[49003]: <info>  [1765444071.2157] manager: NetworkManager state is now CONNECTED_LOCAL
Dec 11 09:07:51 compute-0 NetworkManager[49003]: <info>  [1765444071.2159] device (eth1): Activation: successful, device activated.
Dec 11 09:07:51 compute-0 NetworkManager[49003]: <info>  [1765444071.2481] device (eth0): state change: ip-check -> secondaries (reason 'none', managed-type: 'assume')
Dec 11 09:07:51 compute-0 NetworkManager[49003]: <info>  [1765444071.2484] device (eth0): state change: secondaries -> activated (reason 'none', managed-type: 'assume')
Dec 11 09:07:51 compute-0 NetworkManager[49003]: <info>  [1765444071.2486] manager: NetworkManager state is now CONNECTED_SITE
Dec 11 09:07:51 compute-0 NetworkManager[49003]: <info>  [1765444071.2488] device (eth0): Activation: successful, device activated.
Dec 11 09:07:51 compute-0 NetworkManager[49003]: <info>  [1765444071.2493] manager: NetworkManager state is now CONNECTED_GLOBAL
Dec 11 09:07:51 compute-0 NetworkManager[49003]: <info>  [1765444071.2526] manager: startup complete
Dec 11 09:07:51 compute-0 systemd[1]: Finished Network Manager Wait Online.
Dec 11 09:07:51 compute-0 sudo[49214]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kmehclxedgjtirkwklcmglsqimthgmkc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765444071.0606563-464-97944148533766/AnsiballZ_dnf.py'
Dec 11 09:07:51 compute-0 sudo[49214]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 11 09:07:51 compute-0 python3.9[49216]: ansible-ansible.legacy.dnf Invoked with name=['os-net-config'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Dec 11 09:07:57 compute-0 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Dec 11 09:07:57 compute-0 systemd[1]: Starting man-db-cache-update.service...
Dec 11 09:07:57 compute-0 systemd[1]: Reloading.
Dec 11 09:07:57 compute-0 systemd-rc-local-generator[49265]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 11 09:07:57 compute-0 systemd-sysv-generator[49271]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 11 09:07:58 compute-0 systemd[1]: Queuing reload/restart jobs for marked units…
Dec 11 09:07:58 compute-0 sudo[49214]: pam_unix(sudo:session): session closed for user root
Dec 11 09:07:58 compute-0 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Dec 11 09:07:58 compute-0 systemd[1]: Finished man-db-cache-update.service.
Dec 11 09:07:58 compute-0 systemd[1]: run-r74cb2a7a49114bc294724efe2d24fb18.service: Deactivated successfully.
Dec 11 09:07:59 compute-0 sudo[49673]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tpgeurdygodmgsgcqypcsncgxktdddtg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765444079.3145285-500-263459972916317/AnsiballZ_stat.py'
Dec 11 09:07:59 compute-0 sudo[49673]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 11 09:07:59 compute-0 python3.9[49675]: ansible-ansible.builtin.stat Invoked with path=/var/lib/edpm-config/os-net-config.returncode follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 11 09:07:59 compute-0 sudo[49673]: pam_unix(sudo:session): session closed for user root
Dec 11 09:08:00 compute-0 sudo[49825]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sixzxuapogojviwnudijemjzaicyolif ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765444080.016135-527-27453612519748/AnsiballZ_ini_file.py'
Dec 11 09:08:00 compute-0 sudo[49825]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 11 09:08:00 compute-0 python3.9[49827]: ansible-community.general.ini_file Invoked with backup=True mode=0644 no_extra_spaces=True option=no-auto-default path=/etc/NetworkManager/NetworkManager.conf section=main state=present value=* exclusive=True ignore_spaces=False allow_no_value=False modify_inactive_option=True create=True follow=False unsafe_writes=False section_has_values=None values=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 11 09:08:00 compute-0 sudo[49825]: pam_unix(sudo:session): session closed for user root
Dec 11 09:08:01 compute-0 sudo[49979]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zchkncetwzytjzymsznsdnkaddpigekp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765444080.9920378-557-15457898260551/AnsiballZ_ini_file.py'
Dec 11 09:08:01 compute-0 sudo[49979]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 11 09:08:01 compute-0 systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Dec 11 09:08:01 compute-0 python3.9[49981]: ansible-community.general.ini_file Invoked with backup=True mode=0644 no_extra_spaces=True option=dns path=/etc/NetworkManager/NetworkManager.conf section=main state=absent value=none exclusive=True ignore_spaces=False allow_no_value=False modify_inactive_option=True create=True follow=False unsafe_writes=False section_has_values=None values=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 11 09:08:01 compute-0 sudo[49979]: pam_unix(sudo:session): session closed for user root
Dec 11 09:08:01 compute-0 sudo[50131]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mjkzbfuswfootvsexkqsofgmhazbqkjh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765444081.6363213-557-281334589901527/AnsiballZ_ini_file.py'
Dec 11 09:08:01 compute-0 sudo[50131]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 11 09:08:02 compute-0 python3.9[50133]: ansible-community.general.ini_file Invoked with backup=True mode=0644 no_extra_spaces=True option=dns path=/etc/NetworkManager/conf.d/99-cloud-init.conf section=main state=absent value=none exclusive=True ignore_spaces=False allow_no_value=False modify_inactive_option=True create=True follow=False unsafe_writes=False section_has_values=None values=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 11 09:08:02 compute-0 sudo[50131]: pam_unix(sudo:session): session closed for user root
Dec 11 09:08:02 compute-0 sudo[50283]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wlmxeqgbbcbtkuoezwlhyscitsnnbvho ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765444082.3221834-602-206581424240367/AnsiballZ_ini_file.py'
Dec 11 09:08:02 compute-0 sudo[50283]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 11 09:08:02 compute-0 python3.9[50285]: ansible-community.general.ini_file Invoked with backup=True mode=0644 no_extra_spaces=True option=rc-manager path=/etc/NetworkManager/NetworkManager.conf section=main state=absent value=unmanaged exclusive=True ignore_spaces=False allow_no_value=False modify_inactive_option=True create=True follow=False unsafe_writes=False section_has_values=None values=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 11 09:08:02 compute-0 sudo[50283]: pam_unix(sudo:session): session closed for user root
Dec 11 09:08:03 compute-0 sudo[50435]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ebnzjmwjugxicqfpwdminlexqxusazdr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765444082.9494712-602-247359325988057/AnsiballZ_ini_file.py'
Dec 11 09:08:03 compute-0 sudo[50435]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 11 09:08:03 compute-0 python3.9[50437]: ansible-community.general.ini_file Invoked with backup=True mode=0644 no_extra_spaces=True option=rc-manager path=/etc/NetworkManager/conf.d/99-cloud-init.conf section=main state=absent value=unmanaged exclusive=True ignore_spaces=False allow_no_value=False modify_inactive_option=True create=True follow=False unsafe_writes=False section_has_values=None values=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 11 09:08:03 compute-0 sudo[50435]: pam_unix(sudo:session): session closed for user root
Dec 11 09:08:04 compute-0 sudo[50587]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nrgzyrdygkehlfqvivvdqxodvbhlflot ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765444084.0029628-647-147205960811154/AnsiballZ_stat.py'
Dec 11 09:08:04 compute-0 sudo[50587]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 11 09:08:04 compute-0 python3.9[50589]: ansible-ansible.legacy.stat Invoked with path=/etc/dhcp/dhclient-enter-hooks follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 11 09:08:04 compute-0 sudo[50587]: pam_unix(sudo:session): session closed for user root
Dec 11 09:08:04 compute-0 sudo[50710]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wczhzrwrkludgspmkcywxiixhuxcbezh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765444084.0029628-647-147205960811154/AnsiballZ_copy.py'
Dec 11 09:08:04 compute-0 sudo[50710]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 11 09:08:05 compute-0 python3.9[50712]: ansible-ansible.legacy.copy Invoked with dest=/etc/dhcp/dhclient-enter-hooks mode=0755 src=/home/zuul/.ansible/tmp/ansible-tmp-1765444084.0029628-647-147205960811154/.source _original_basename=.d6f6h2yk follow=False checksum=f6278a40de79a9841f6ed1fc584538225566990c backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 11 09:08:05 compute-0 sudo[50710]: pam_unix(sudo:session): session closed for user root
Dec 11 09:08:05 compute-0 sudo[50862]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fqzzawqvvrphmxxjgpsoflvmvblzvpzu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765444085.384171-692-240550158469920/AnsiballZ_file.py'
Dec 11 09:08:05 compute-0 sudo[50862]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 11 09:08:05 compute-0 python3.9[50864]: ansible-ansible.builtin.file Invoked with mode=0755 path=/etc/os-net-config state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 11 09:08:05 compute-0 sudo[50862]: pam_unix(sudo:session): session closed for user root
Dec 11 09:08:06 compute-0 sudo[51014]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kqnmqkjnpvuodwwfqbqfoyyxafttvvur ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765444086.036601-716-162567864034640/AnsiballZ_edpm_os_net_config_mappings.py'
Dec 11 09:08:06 compute-0 sudo[51014]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 11 09:08:06 compute-0 python3.9[51016]: ansible-edpm_os_net_config_mappings Invoked with net_config_data_lookup={}
Dec 11 09:08:06 compute-0 sudo[51014]: pam_unix(sudo:session): session closed for user root
Dec 11 09:08:07 compute-0 sudo[51166]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gxkgibogvrvlkplhcihrfepkldnklnou ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765444086.9031596-743-150241796412720/AnsiballZ_file.py'
Dec 11 09:08:07 compute-0 sudo[51166]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 11 09:08:07 compute-0 python3.9[51168]: ansible-ansible.builtin.file Invoked with path=/var/lib/edpm-config/scripts state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 11 09:08:07 compute-0 sudo[51166]: pam_unix(sudo:session): session closed for user root
Dec 11 09:08:08 compute-0 sudo[51318]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-idsofzkdnkftmobojbvzmwuglzilnanm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765444087.7815907-773-114668242601447/AnsiballZ_stat.py'
Dec 11 09:08:08 compute-0 sudo[51318]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 11 09:08:08 compute-0 sudo[51318]: pam_unix(sudo:session): session closed for user root
Dec 11 09:08:08 compute-0 sudo[51441]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bwfatxdpghccgwemankibktqrpetmyjs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765444087.7815907-773-114668242601447/AnsiballZ_copy.py'
Dec 11 09:08:08 compute-0 sudo[51441]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 11 09:08:08 compute-0 sudo[51441]: pam_unix(sudo:session): session closed for user root
Dec 11 09:08:09 compute-0 sudo[51593]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dxnvnukkdumpwuzzebtabgcgvmhnddby ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765444089.1572702-818-68315995547730/AnsiballZ_slurp.py'
Dec 11 09:08:09 compute-0 sudo[51593]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 11 09:08:09 compute-0 python3.9[51595]: ansible-ansible.builtin.slurp Invoked with path=/etc/os-net-config/config.yaml src=/etc/os-net-config/config.yaml
Dec 11 09:08:09 compute-0 sudo[51593]: pam_unix(sudo:session): session closed for user root
Dec 11 09:08:10 compute-0 sudo[51768]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ulsmunvzrvodqccyairklyjhkmqjocxb ; ANSIBLE_ASYNC_DIR=\'~/.ansible_async\' /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765444090.0675058-845-132191942517268/async_wrapper.py j700098005729 300 /home/zuul/.ansible/tmp/ansible-tmp-1765444090.0675058-845-132191942517268/AnsiballZ_edpm_os_net_config.py _'
Dec 11 09:08:10 compute-0 sudo[51768]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 11 09:08:10 compute-0 ansible-async_wrapper.py[51770]: Invoked with j700098005729 300 /home/zuul/.ansible/tmp/ansible-tmp-1765444090.0675058-845-132191942517268/AnsiballZ_edpm_os_net_config.py _
Dec 11 09:08:10 compute-0 ansible-async_wrapper.py[51773]: Starting module and watcher
Dec 11 09:08:10 compute-0 ansible-async_wrapper.py[51773]: Start watching 51774 (300)
Dec 11 09:08:10 compute-0 ansible-async_wrapper.py[51774]: Start module (51774)
Dec 11 09:08:10 compute-0 ansible-async_wrapper.py[51770]: Return async_wrapper task started.
Dec 11 09:08:10 compute-0 sudo[51768]: pam_unix(sudo:session): session closed for user root
Dec 11 09:08:11 compute-0 python3.9[51775]: ansible-edpm_os_net_config Invoked with cleanup=True config_file=/etc/os-net-config/config.yaml debug=True detailed_exit_codes=True safe_defaults=False use_nmstate=True
Dec 11 09:08:11 compute-0 kernel: cfg80211: Loading compiled-in X.509 certificates for regulatory database
Dec 11 09:08:11 compute-0 kernel: Loaded X.509 cert 'sforshee: 00b28ddf47aef9cea7'
Dec 11 09:08:11 compute-0 kernel: Loaded X.509 cert 'wens: 61c038651aabdcf94bd0ac7ff06c7248db18c600'
Dec 11 09:08:11 compute-0 kernel: platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
Dec 11 09:08:11 compute-0 kernel: cfg80211: failed to load regulatory.db
Dec 11 09:08:12 compute-0 NetworkManager[49003]: <info>  [1765444092.9355] audit: op="checkpoint-create" arg="/org/freedesktop/NetworkManager/Checkpoint/1" pid=51776 uid=0 result="success"
Dec 11 09:08:12 compute-0 NetworkManager[49003]: <info>  [1765444092.9372] audit: op="checkpoint-adjust-rollback-timeout" arg="/org/freedesktop/NetworkManager/Checkpoint/1" pid=51776 uid=0 result="success"
Dec 11 09:08:12 compute-0 NetworkManager[49003]: <info>  [1765444092.9887] manager: (br-ex): new Open vSwitch Bridge device (/org/freedesktop/NetworkManager/Devices/4)
Dec 11 09:08:12 compute-0 NetworkManager[49003]: <info>  [1765444092.9893] audit: op="connection-add" uuid="1648d5b2-4ca3-4061-9ad0-e88bd77cd733" name="br-ex-br" pid=51776 uid=0 result="success"
Dec 11 09:08:12 compute-0 NetworkManager[49003]: <info>  [1765444092.9906] manager: (br-ex): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/5)
Dec 11 09:08:12 compute-0 NetworkManager[49003]: <info>  [1765444092.9908] audit: op="connection-add" uuid="49cc098e-8d38-4c3b-8503-edda2976f858" name="br-ex-port" pid=51776 uid=0 result="success"
Dec 11 09:08:12 compute-0 NetworkManager[49003]: <info>  [1765444092.9921] manager: (eth1): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/6)
Dec 11 09:08:12 compute-0 NetworkManager[49003]: <info>  [1765444092.9924] audit: op="connection-add" uuid="fba8b78e-8569-402e-821e-d1a290ca40bf" name="eth1-port" pid=51776 uid=0 result="success"
Dec 11 09:08:12 compute-0 NetworkManager[49003]: <info>  [1765444092.9936] manager: (vlan20): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/7)
Dec 11 09:08:12 compute-0 NetworkManager[49003]: <info>  [1765444092.9938] audit: op="connection-add" uuid="41888e6a-bdaf-4e58-8e71-63be45580412" name="vlan20-port" pid=51776 uid=0 result="success"
Dec 11 09:08:12 compute-0 NetworkManager[49003]: <info>  [1765444092.9951] manager: (vlan21): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/8)
Dec 11 09:08:12 compute-0 NetworkManager[49003]: <info>  [1765444092.9954] audit: op="connection-add" uuid="8d4f8bf0-8772-4b62-b59b-0b8fb6bbf54c" name="vlan21-port" pid=51776 uid=0 result="success"
Dec 11 09:08:12 compute-0 NetworkManager[49003]: <info>  [1765444092.9967] manager: (vlan22): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/9)
Dec 11 09:08:12 compute-0 NetworkManager[49003]: <info>  [1765444092.9969] audit: op="connection-add" uuid="0a735faa-c6f3-4c6d-ba2b-a5cc4c39ebe7" name="vlan22-port" pid=51776 uid=0 result="success"
Dec 11 09:08:12 compute-0 NetworkManager[49003]: <info>  [1765444092.9981] manager: (vlan23): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/10)
Dec 11 09:08:12 compute-0 NetworkManager[49003]: <info>  [1765444092.9984] audit: op="connection-add" uuid="095fad24-197c-48e0-9515-2a6a31cc3d7b" name="vlan23-port" pid=51776 uid=0 result="success"
Dec 11 09:08:13 compute-0 NetworkManager[49003]: <info>  [1765444093.0008] audit: op="connection-update" uuid="5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03" name="System eth0" args="connection.timestamp,connection.autoconnect-priority,802-3-ethernet.mtu,ipv4.dhcp-client-id,ipv4.dhcp-timeout,ipv6.dhcp-timeout,ipv6.method,ipv6.addr-gen-mode" pid=51776 uid=0 result="success"
Dec 11 09:08:13 compute-0 NetworkManager[49003]: <info>  [1765444093.0026] manager: (br-ex): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/11)
Dec 11 09:08:13 compute-0 NetworkManager[49003]: <info>  [1765444093.0028] audit: op="connection-add" uuid="6c9ba335-1822-4f38-8372-5ade863cba2e" name="br-ex-if" pid=51776 uid=0 result="success"
Dec 11 09:08:13 compute-0 NetworkManager[49003]: <info>  [1765444093.0090] audit: op="connection-update" uuid="ba5fe1b2-21e2-562b-838b-8e9d781a1880" name="ci-private-network" args="connection.slave-type,connection.timestamp,connection.controller,connection.port-type,connection.master,ipv4.routing-rules,ipv4.dns,ipv4.never-default,ipv4.routes,ipv4.method,ipv4.addresses,ovs-interface.type,ipv6.dns,ipv6.routing-rules,ipv6.routes,ipv6.method,ipv6.addr-gen-mode,ipv6.addresses,ovs-external-ids.data" pid=51776 uid=0 result="success"
Dec 11 09:08:13 compute-0 NetworkManager[49003]: <info>  [1765444093.0109] manager: (vlan20): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/12)
Dec 11 09:08:13 compute-0 NetworkManager[49003]: <info>  [1765444093.0112] audit: op="connection-add" uuid="567fb289-322f-44bf-9bf5-be4fd9c6eb47" name="vlan20-if" pid=51776 uid=0 result="success"
Dec 11 09:08:13 compute-0 NetworkManager[49003]: <info>  [1765444093.0130] manager: (vlan21): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/13)
Dec 11 09:08:13 compute-0 NetworkManager[49003]: <info>  [1765444093.0132] audit: op="connection-add" uuid="66316c35-1894-47c5-af5f-0b668d56d853" name="vlan21-if" pid=51776 uid=0 result="success"
Dec 11 09:08:13 compute-0 NetworkManager[49003]: <info>  [1765444093.0153] manager: (vlan22): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/14)
Dec 11 09:08:13 compute-0 NetworkManager[49003]: <info>  [1765444093.0155] audit: op="connection-add" uuid="507dc938-8729-47f9-bfad-ce8ab91f44ae" name="vlan22-if" pid=51776 uid=0 result="success"
Dec 11 09:08:13 compute-0 NetworkManager[49003]: <info>  [1765444093.0170] manager: (vlan23): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/15)
Dec 11 09:08:13 compute-0 NetworkManager[49003]: <info>  [1765444093.0173] audit: op="connection-add" uuid="385df5c0-3156-4d39-9156-48397f8ea288" name="vlan23-if" pid=51776 uid=0 result="success"
Dec 11 09:08:13 compute-0 NetworkManager[49003]: <info>  [1765444093.0185] audit: op="connection-delete" uuid="5a3b4b8d-0dcd-398c-b979-a7322f259c3a" name="Wired connection 1" pid=51776 uid=0 result="success"
Dec 11 09:08:13 compute-0 NetworkManager[49003]: <info>  [1765444093.0200] device (br-ex)[Open vSwitch Bridge]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Dec 11 09:08:13 compute-0 NetworkManager[49003]: <warn>  [1765444093.0205] device (br-ex)[Open vSwitch Bridge]: error setting IPv4 forwarding to '1': Success
Dec 11 09:08:13 compute-0 NetworkManager[49003]: <info>  [1765444093.0213] device (br-ex)[Open vSwitch Bridge]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Dec 11 09:08:13 compute-0 NetworkManager[49003]: <info>  [1765444093.0219] device (br-ex)[Open vSwitch Bridge]: Activation: starting connection 'br-ex-br' (1648d5b2-4ca3-4061-9ad0-e88bd77cd733)
Dec 11 09:08:13 compute-0 NetworkManager[49003]: <info>  [1765444093.0220] audit: op="connection-activate" uuid="1648d5b2-4ca3-4061-9ad0-e88bd77cd733" name="br-ex-br" pid=51776 uid=0 result="success"
Dec 11 09:08:13 compute-0 NetworkManager[49003]: <info>  [1765444093.0221] device (br-ex)[Open vSwitch Port]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Dec 11 09:08:13 compute-0 NetworkManager[49003]: <warn>  [1765444093.0223] device (br-ex)[Open vSwitch Port]: error setting IPv4 forwarding to '1': Success
Dec 11 09:08:13 compute-0 NetworkManager[49003]: <info>  [1765444093.0230] device (br-ex)[Open vSwitch Port]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Dec 11 09:08:13 compute-0 NetworkManager[49003]: <info>  [1765444093.0235] device (br-ex)[Open vSwitch Port]: Activation: starting connection 'br-ex-port' (49cc098e-8d38-4c3b-8503-edda2976f858)
Dec 11 09:08:13 compute-0 NetworkManager[49003]: <info>  [1765444093.0237] device (eth1)[Open vSwitch Port]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Dec 11 09:08:13 compute-0 NetworkManager[49003]: <warn>  [1765444093.0239] device (eth1)[Open vSwitch Port]: error setting IPv4 forwarding to '1': Resource temporarily unavailable
Dec 11 09:08:13 compute-0 NetworkManager[49003]: <info>  [1765444093.0244] device (eth1)[Open vSwitch Port]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Dec 11 09:08:13 compute-0 NetworkManager[49003]: <info>  [1765444093.0250] device (eth1)[Open vSwitch Port]: Activation: starting connection 'eth1-port' (fba8b78e-8569-402e-821e-d1a290ca40bf)
Dec 11 09:08:13 compute-0 NetworkManager[49003]: <info>  [1765444093.0252] device (vlan20)[Open vSwitch Port]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Dec 11 09:08:13 compute-0 NetworkManager[49003]: <warn>  [1765444093.0253] device (vlan20)[Open vSwitch Port]: error setting IPv4 forwarding to '1': Resource temporarily unavailable
Dec 11 09:08:13 compute-0 NetworkManager[49003]: <info>  [1765444093.0258] device (vlan20)[Open vSwitch Port]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Dec 11 09:08:13 compute-0 NetworkManager[49003]: <info>  [1765444093.0264] device (vlan20)[Open vSwitch Port]: Activation: starting connection 'vlan20-port' (41888e6a-bdaf-4e58-8e71-63be45580412)
Dec 11 09:08:13 compute-0 NetworkManager[49003]: <info>  [1765444093.0265] device (vlan21)[Open vSwitch Port]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Dec 11 09:08:13 compute-0 NetworkManager[49003]: <warn>  [1765444093.0267] device (vlan21)[Open vSwitch Port]: error setting IPv4 forwarding to '1': Resource temporarily unavailable
Dec 11 09:08:13 compute-0 NetworkManager[49003]: <info>  [1765444093.0272] device (vlan21)[Open vSwitch Port]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Dec 11 09:08:13 compute-0 NetworkManager[49003]: <info>  [1765444093.0277] device (vlan21)[Open vSwitch Port]: Activation: starting connection 'vlan21-port' (8d4f8bf0-8772-4b62-b59b-0b8fb6bbf54c)
Dec 11 09:08:13 compute-0 NetworkManager[49003]: <info>  [1765444093.0279] device (vlan22)[Open vSwitch Port]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Dec 11 09:08:13 compute-0 NetworkManager[49003]: <warn>  [1765444093.0281] device (vlan22)[Open vSwitch Port]: error setting IPv4 forwarding to '1': Resource temporarily unavailable
Dec 11 09:08:13 compute-0 NetworkManager[49003]: <info>  [1765444093.0286] device (vlan22)[Open vSwitch Port]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Dec 11 09:08:13 compute-0 NetworkManager[49003]: <info>  [1765444093.0292] device (vlan22)[Open vSwitch Port]: Activation: starting connection 'vlan22-port' (0a735faa-c6f3-4c6d-ba2b-a5cc4c39ebe7)
Dec 11 09:08:13 compute-0 NetworkManager[49003]: <info>  [1765444093.0294] device (vlan23)[Open vSwitch Port]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Dec 11 09:08:13 compute-0 NetworkManager[49003]: <warn>  [1765444093.0296] device (vlan23)[Open vSwitch Port]: error setting IPv4 forwarding to '1': Resource temporarily unavailable
Dec 11 09:08:13 compute-0 NetworkManager[49003]: <info>  [1765444093.0302] device (vlan23)[Open vSwitch Port]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Dec 11 09:08:13 compute-0 NetworkManager[49003]: <info>  [1765444093.0307] device (vlan23)[Open vSwitch Port]: Activation: starting connection 'vlan23-port' (095fad24-197c-48e0-9515-2a6a31cc3d7b)
Dec 11 09:08:13 compute-0 NetworkManager[49003]: <info>  [1765444093.0308] device (br-ex)[Open vSwitch Bridge]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Dec 11 09:08:13 compute-0 NetworkManager[49003]: <info>  [1765444093.0311] device (br-ex)[Open vSwitch Bridge]: state change: prepare -> config (reason 'none', managed-type: 'full')
Dec 11 09:08:13 compute-0 NetworkManager[49003]: <info>  [1765444093.0314] device (br-ex)[Open vSwitch Bridge]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Dec 11 09:08:13 compute-0 NetworkManager[49003]: <info>  [1765444093.0321] device (br-ex)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Dec 11 09:08:13 compute-0 NetworkManager[49003]: <warn>  [1765444093.0323] device (br-ex)[Open vSwitch Interface]: error setting IPv4 forwarding to '1': No such file or directory
Dec 11 09:08:13 compute-0 NetworkManager[49003]: <info>  [1765444093.0326] device (br-ex)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Dec 11 09:08:13 compute-0 NetworkManager[49003]: <info>  [1765444093.0331] device (br-ex)[Open vSwitch Interface]: Activation: starting connection 'br-ex-if' (6c9ba335-1822-4f38-8372-5ade863cba2e)
Dec 11 09:08:13 compute-0 NetworkManager[49003]: <info>  [1765444093.0333] device (br-ex)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Dec 11 09:08:13 compute-0 NetworkManager[49003]: <info>  [1765444093.0338] device (br-ex)[Open vSwitch Port]: state change: prepare -> config (reason 'none', managed-type: 'full')
Dec 11 09:08:13 compute-0 NetworkManager[49003]: <info>  [1765444093.0340] device (br-ex)[Open vSwitch Port]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Dec 11 09:08:13 compute-0 NetworkManager[49003]: <info>  [1765444093.0341] device (br-ex)[Open vSwitch Port]: Activation: connection 'br-ex-port' attached as port, continuing activation
Dec 11 09:08:13 compute-0 NetworkManager[49003]: <info>  [1765444093.0343] device (eth1): state change: activated -> deactivating (reason 'new-activation', managed-type: 'full')
Dec 11 09:08:13 compute-0 NetworkManager[49003]: <info>  [1765444093.0360] device (eth1): disconnecting for new activation request.
Dec 11 09:08:13 compute-0 NetworkManager[49003]: <info>  [1765444093.0361] device (eth1)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Dec 11 09:08:13 compute-0 NetworkManager[49003]: <info>  [1765444093.0364] device (eth1)[Open vSwitch Port]: state change: prepare -> config (reason 'none', managed-type: 'full')
Dec 11 09:08:13 compute-0 NetworkManager[49003]: <info>  [1765444093.0366] device (eth1)[Open vSwitch Port]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Dec 11 09:08:13 compute-0 NetworkManager[49003]: <info>  [1765444093.0367] device (eth1)[Open vSwitch Port]: Activation: connection 'eth1-port' attached as port, continuing activation
Dec 11 09:08:13 compute-0 NetworkManager[49003]: <info>  [1765444093.0373] device (vlan20)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Dec 11 09:08:13 compute-0 NetworkManager[49003]: <warn>  [1765444093.0375] device (vlan20)[Open vSwitch Interface]: error setting IPv4 forwarding to '1': No such file or directory
Dec 11 09:08:13 compute-0 NetworkManager[49003]: <info>  [1765444093.0379] device (vlan20)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Dec 11 09:08:13 compute-0 NetworkManager[49003]: <info>  [1765444093.0384] device (vlan20)[Open vSwitch Interface]: Activation: starting connection 'vlan20-if' (567fb289-322f-44bf-9bf5-be4fd9c6eb47)
Dec 11 09:08:13 compute-0 NetworkManager[49003]: <info>  [1765444093.0388] device (vlan20)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Dec 11 09:08:13 compute-0 NetworkManager[49003]: <info>  [1765444093.0390] device (vlan20)[Open vSwitch Port]: state change: prepare -> config (reason 'none', managed-type: 'full')
Dec 11 09:08:13 compute-0 NetworkManager[49003]: <info>  [1765444093.0392] device (vlan20)[Open vSwitch Port]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Dec 11 09:08:13 compute-0 NetworkManager[49003]: <info>  [1765444093.0392] device (vlan20)[Open vSwitch Port]: Activation: connection 'vlan20-port' attached as port, continuing activation
Dec 11 09:08:13 compute-0 NetworkManager[49003]: <info>  [1765444093.0395] device (vlan21)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Dec 11 09:08:13 compute-0 NetworkManager[49003]: <warn>  [1765444093.0396] device (vlan21)[Open vSwitch Interface]: error setting IPv4 forwarding to '1': No such file or directory
Dec 11 09:08:13 compute-0 NetworkManager[49003]: <info>  [1765444093.0398] device (vlan21)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Dec 11 09:08:13 compute-0 NetworkManager[49003]: <info>  [1765444093.0401] device (vlan21)[Open vSwitch Interface]: Activation: starting connection 'vlan21-if' (66316c35-1894-47c5-af5f-0b668d56d853)
Dec 11 09:08:13 compute-0 NetworkManager[49003]: <info>  [1765444093.0402] device (vlan21)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Dec 11 09:08:13 compute-0 NetworkManager[49003]: <info>  [1765444093.0404] device (vlan21)[Open vSwitch Port]: state change: prepare -> config (reason 'none', managed-type: 'full')
Dec 11 09:08:13 compute-0 NetworkManager[49003]: <info>  [1765444093.0405] device (vlan21)[Open vSwitch Port]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Dec 11 09:08:13 compute-0 NetworkManager[49003]: <info>  [1765444093.0406] device (vlan21)[Open vSwitch Port]: Activation: connection 'vlan21-port' attached as port, continuing activation
Dec 11 09:08:13 compute-0 NetworkManager[49003]: <info>  [1765444093.0407] device (vlan22)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Dec 11 09:08:13 compute-0 NetworkManager[49003]: <warn>  [1765444093.0408] device (vlan22)[Open vSwitch Interface]: error setting IPv4 forwarding to '1': No such file or directory
Dec 11 09:08:13 compute-0 NetworkManager[49003]: <info>  [1765444093.0410] device (vlan22)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Dec 11 09:08:13 compute-0 NetworkManager[49003]: <info>  [1765444093.0413] device (vlan22)[Open vSwitch Interface]: Activation: starting connection 'vlan22-if' (507dc938-8729-47f9-bfad-ce8ab91f44ae)
Dec 11 09:08:13 compute-0 NetworkManager[49003]: <info>  [1765444093.0414] device (vlan22)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Dec 11 09:08:13 compute-0 NetworkManager[49003]: <info>  [1765444093.0416] device (vlan22)[Open vSwitch Port]: state change: prepare -> config (reason 'none', managed-type: 'full')
Dec 11 09:08:13 compute-0 NetworkManager[49003]: <info>  [1765444093.0417] device (vlan22)[Open vSwitch Port]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Dec 11 09:08:13 compute-0 NetworkManager[49003]: <info>  [1765444093.0418] device (vlan22)[Open vSwitch Port]: Activation: connection 'vlan22-port' attached as port, continuing activation
Dec 11 09:08:13 compute-0 NetworkManager[49003]: <info>  [1765444093.0419] device (vlan23)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Dec 11 09:08:13 compute-0 NetworkManager[49003]: <warn>  [1765444093.0420] device (vlan23)[Open vSwitch Interface]: error setting IPv4 forwarding to '1': No such file or directory
Dec 11 09:08:13 compute-0 NetworkManager[49003]: <info>  [1765444093.0422] device (vlan23)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Dec 11 09:08:13 compute-0 NetworkManager[49003]: <info>  [1765444093.0425] device (vlan23)[Open vSwitch Interface]: Activation: starting connection 'vlan23-if' (385df5c0-3156-4d39-9156-48397f8ea288)
Dec 11 09:08:13 compute-0 NetworkManager[49003]: <info>  [1765444093.0425] device (vlan23)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Dec 11 09:08:13 compute-0 NetworkManager[49003]: <info>  [1765444093.0427] device (vlan23)[Open vSwitch Port]: state change: prepare -> config (reason 'none', managed-type: 'full')
Dec 11 09:08:13 compute-0 NetworkManager[49003]: <info>  [1765444093.0428] device (vlan23)[Open vSwitch Port]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Dec 11 09:08:13 compute-0 NetworkManager[49003]: <info>  [1765444093.0429] device (vlan23)[Open vSwitch Port]: Activation: connection 'vlan23-port' attached as port, continuing activation
Dec 11 09:08:13 compute-0 NetworkManager[49003]: <info>  [1765444093.0430] device (br-ex)[Open vSwitch Bridge]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Dec 11 09:08:13 compute-0 NetworkManager[49003]: <info>  [1765444093.0440] audit: op="device-reapply" interface="eth0" ifindex=2 args="connection.autoconnect-priority,802-3-ethernet.mtu,ipv4.dhcp-client-id,ipv4.dhcp-timeout,ipv6.method,ipv6.addr-gen-mode" pid=51776 uid=0 result="success"
Dec 11 09:08:13 compute-0 NetworkManager[49003]: <info>  [1765444093.0442] device (br-ex)[Open vSwitch Interface]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Dec 11 09:08:13 compute-0 NetworkManager[49003]: <info>  [1765444093.0444] device (br-ex)[Open vSwitch Interface]: state change: prepare -> config (reason 'none', managed-type: 'full')
Dec 11 09:08:13 compute-0 NetworkManager[49003]: <info>  [1765444093.0445] device (br-ex)[Open vSwitch Interface]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Dec 11 09:08:13 compute-0 NetworkManager[49003]: <info>  [1765444093.0451] device (br-ex)[Open vSwitch Port]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Dec 11 09:08:13 compute-0 NetworkManager[49003]: <info>  [1765444093.0453] device (eth1)[Open vSwitch Port]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Dec 11 09:08:13 compute-0 NetworkManager[49003]: <info>  [1765444093.0456] device (vlan20)[Open vSwitch Interface]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Dec 11 09:08:13 compute-0 NetworkManager[49003]: <info>  [1765444093.0458] device (vlan20)[Open vSwitch Interface]: state change: prepare -> config (reason 'none', managed-type: 'full')
Dec 11 09:08:13 compute-0 NetworkManager[49003]: <info>  [1765444093.0459] device (vlan20)[Open vSwitch Interface]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Dec 11 09:08:13 compute-0 NetworkManager[49003]: <info>  [1765444093.0462] device (vlan20)[Open vSwitch Port]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Dec 11 09:08:13 compute-0 NetworkManager[49003]: <info>  [1765444093.0465] device (vlan21)[Open vSwitch Interface]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Dec 11 09:08:13 compute-0 NetworkManager[49003]: <info>  [1765444093.0467] device (vlan21)[Open vSwitch Interface]: state change: prepare -> config (reason 'none', managed-type: 'full')
Dec 11 09:08:13 compute-0 NetworkManager[49003]: <info>  [1765444093.0468] device (vlan21)[Open vSwitch Interface]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Dec 11 09:08:13 compute-0 NetworkManager[49003]: <info>  [1765444093.0472] device (vlan21)[Open vSwitch Port]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Dec 11 09:08:13 compute-0 NetworkManager[49003]: <info>  [1765444093.0474] device (vlan22)[Open vSwitch Interface]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Dec 11 09:08:13 compute-0 NetworkManager[49003]: <info>  [1765444093.0477] device (vlan22)[Open vSwitch Interface]: state change: prepare -> config (reason 'none', managed-type: 'full')
Dec 11 09:08:13 compute-0 NetworkManager[49003]: <info>  [1765444093.0478] device (vlan22)[Open vSwitch Interface]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Dec 11 09:08:13 compute-0 NetworkManager[49003]: <info>  [1765444093.0482] device (vlan22)[Open vSwitch Port]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Dec 11 09:08:13 compute-0 NetworkManager[49003]: <info>  [1765444093.0484] device (vlan23)[Open vSwitch Interface]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Dec 11 09:08:13 compute-0 NetworkManager[49003]: <info>  [1765444093.0487] device (vlan23)[Open vSwitch Interface]: state change: prepare -> config (reason 'none', managed-type: 'full')
Dec 11 09:08:13 compute-0 NetworkManager[49003]: <info>  [1765444093.0488] device (vlan23)[Open vSwitch Interface]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Dec 11 09:08:13 compute-0 kernel: ovs-system: entered promiscuous mode
Dec 11 09:08:13 compute-0 NetworkManager[49003]: <info>  [1765444093.0492] device (vlan23)[Open vSwitch Port]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Dec 11 09:08:13 compute-0 NetworkManager[49003]: <info>  [1765444093.0495] dhcp4 (eth0): canceled DHCP transaction
Dec 11 09:08:13 compute-0 NetworkManager[49003]: <info>  [1765444093.0496] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
Dec 11 09:08:13 compute-0 NetworkManager[49003]: <info>  [1765444093.0496] dhcp4 (eth0): state changed no lease
Dec 11 09:08:13 compute-0 NetworkManager[49003]: <info>  [1765444093.0498] dhcp4 (eth0): activation: beginning transaction (no timeout)
Dec 11 09:08:13 compute-0 NetworkManager[49003]: <info>  [1765444093.0513] device (br-ex)[Open vSwitch Interface]: Activation: connection 'br-ex-if' attached as port, continuing activation
Dec 11 09:08:13 compute-0 NetworkManager[49003]: <info>  [1765444093.0517] audit: op="device-reapply" interface="eth1" ifindex=3 pid=51776 uid=0 result="fail" reason="Device is not activated"
Dec 11 09:08:13 compute-0 systemd[1]: Starting Network Manager Script Dispatcher Service...
Dec 11 09:08:13 compute-0 kernel: Timeout policy base is empty
Dec 11 09:08:13 compute-0 systemd-udevd[51782]: Network interface NamePolicy= disabled on kernel command line.
Dec 11 09:08:13 compute-0 NetworkManager[49003]: <info>  [1765444093.0552] device (vlan20)[Open vSwitch Interface]: Activation: connection 'vlan20-if' attached as port, continuing activation
Dec 11 09:08:13 compute-0 NetworkManager[49003]: <info>  [1765444093.0560] dhcp4 (eth0): state changed new lease, address=38.102.83.223
Dec 11 09:08:13 compute-0 NetworkManager[49003]: <info>  [1765444093.0601] device (vlan21)[Open vSwitch Interface]: Activation: connection 'vlan21-if' attached as port, continuing activation
Dec 11 09:08:13 compute-0 NetworkManager[49003]: <info>  [1765444093.0610] device (eth1): disconnecting for new activation request.
Dec 11 09:08:13 compute-0 NetworkManager[49003]: <info>  [1765444093.0611] audit: op="connection-activate" uuid="ba5fe1b2-21e2-562b-838b-8e9d781a1880" name="ci-private-network" pid=51776 uid=0 result="success"
Dec 11 09:08:13 compute-0 NetworkManager[49003]: <info>  [1765444093.0612] device (vlan22)[Open vSwitch Interface]: Activation: connection 'vlan22-if' attached as port, continuing activation
Dec 11 09:08:13 compute-0 NetworkManager[49003]: <info>  [1765444093.0618] device (vlan23)[Open vSwitch Interface]: Activation: connection 'vlan23-if' attached as port, continuing activation
Dec 11 09:08:13 compute-0 NetworkManager[49003]: <info>  [1765444093.0634] audit: op="checkpoint-adjust-rollback-timeout" arg="/org/freedesktop/NetworkManager/Checkpoint/1" pid=51776 uid=0 result="success"
Dec 11 09:08:13 compute-0 systemd[1]: Started Network Manager Script Dispatcher Service.
Dec 11 09:08:13 compute-0 NetworkManager[49003]: <info>  [1765444093.0707] device (eth1): state change: deactivating -> disconnected (reason 'new-activation', managed-type: 'full')
Dec 11 09:08:13 compute-0 NetworkManager[49003]: <info>  [1765444093.0816] device (eth1): Activation: starting connection 'ci-private-network' (ba5fe1b2-21e2-562b-838b-8e9d781a1880)
Dec 11 09:08:13 compute-0 NetworkManager[49003]: <info>  [1765444093.0820] device (br-ex)[Open vSwitch Bridge]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Dec 11 09:08:13 compute-0 NetworkManager[49003]: <info>  [1765444093.0823] device (br-ex)[Open vSwitch Port]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Dec 11 09:08:13 compute-0 NetworkManager[49003]: <info>  [1765444093.0827] device (eth1)[Open vSwitch Port]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Dec 11 09:08:13 compute-0 NetworkManager[49003]: <info>  [1765444093.0828] device (vlan20)[Open vSwitch Port]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Dec 11 09:08:13 compute-0 NetworkManager[49003]: <info>  [1765444093.0831] device (vlan21)[Open vSwitch Port]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Dec 11 09:08:13 compute-0 NetworkManager[49003]: <info>  [1765444093.0833] device (vlan22)[Open vSwitch Port]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Dec 11 09:08:13 compute-0 NetworkManager[49003]: <info>  [1765444093.0835] device (vlan23)[Open vSwitch Port]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Dec 11 09:08:13 compute-0 NetworkManager[49003]: <info>  [1765444093.0844] device (eth1): state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Dec 11 09:08:13 compute-0 NetworkManager[49003]: <info>  [1765444093.0850] device (eth1): state change: prepare -> config (reason 'none', managed-type: 'full')
Dec 11 09:08:13 compute-0 NetworkManager[49003]: <info>  [1765444093.0858] device (br-ex)[Open vSwitch Bridge]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Dec 11 09:08:13 compute-0 NetworkManager[49003]: <info>  [1765444093.0863] device (br-ex)[Open vSwitch Bridge]: Activation: successful, device activated.
Dec 11 09:08:13 compute-0 NetworkManager[49003]: <info>  [1765444093.0867] device (br-ex)[Open vSwitch Port]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Dec 11 09:08:13 compute-0 NetworkManager[49003]: <info>  [1765444093.0874] device (br-ex)[Open vSwitch Port]: Activation: successful, device activated.
Dec 11 09:08:13 compute-0 NetworkManager[49003]: <info>  [1765444093.0879] device (eth1)[Open vSwitch Port]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Dec 11 09:08:13 compute-0 NetworkManager[49003]: <info>  [1765444093.0886] device (eth1)[Open vSwitch Port]: Activation: successful, device activated.
Dec 11 09:08:13 compute-0 NetworkManager[49003]: <info>  [1765444093.0890] device (vlan20)[Open vSwitch Port]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Dec 11 09:08:13 compute-0 NetworkManager[49003]: <info>  [1765444093.0895] device (vlan20)[Open vSwitch Port]: Activation: successful, device activated.
Dec 11 09:08:13 compute-0 NetworkManager[49003]: <info>  [1765444093.0898] device (vlan21)[Open vSwitch Port]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Dec 11 09:08:13 compute-0 NetworkManager[49003]: <info>  [1765444093.0904] device (vlan21)[Open vSwitch Port]: Activation: successful, device activated.
Dec 11 09:08:13 compute-0 NetworkManager[49003]: <info>  [1765444093.0909] device (vlan22)[Open vSwitch Port]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Dec 11 09:08:13 compute-0 NetworkManager[49003]: <info>  [1765444093.0914] device (vlan22)[Open vSwitch Port]: Activation: successful, device activated.
Dec 11 09:08:13 compute-0 NetworkManager[49003]: <info>  [1765444093.0918] device (vlan23)[Open vSwitch Port]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Dec 11 09:08:13 compute-0 NetworkManager[49003]: <info>  [1765444093.0924] device (vlan23)[Open vSwitch Port]: Activation: successful, device activated.
Dec 11 09:08:13 compute-0 NetworkManager[49003]: <info>  [1765444093.0946] device (eth1): state change: config -> ip-config (reason 'none', managed-type: 'full')
Dec 11 09:08:13 compute-0 NetworkManager[49003]: <info>  [1765444093.0964] device (eth1): Activation: connection 'ci-private-network' attached as port, continuing activation
Dec 11 09:08:13 compute-0 NetworkManager[49003]: <info>  [1765444093.0971] device (eth1): state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Dec 11 09:08:13 compute-0 NetworkManager[49003]: <info>  [1765444093.1014] device (eth1): state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Dec 11 09:08:13 compute-0 NetworkManager[49003]: <info>  [1765444093.1021] device (eth1): state change: secondaries -> activated (reason 'none', managed-type: 'full')
Dec 11 09:08:13 compute-0 NetworkManager[49003]: <info>  [1765444093.1031] device (eth1): Activation: successful, device activated.
Dec 11 09:08:13 compute-0 kernel: br-ex: entered promiscuous mode
Dec 11 09:08:13 compute-0 kernel: vlan22: entered promiscuous mode
Dec 11 09:08:13 compute-0 kernel: virtio_net virtio5 eth1: entered promiscuous mode
Dec 11 09:08:13 compute-0 systemd-udevd[51780]: Network interface NamePolicy= disabled on kernel command line.
Dec 11 09:08:13 compute-0 NetworkManager[49003]: <info>  [1765444093.1259] device (br-ex)[Open vSwitch Interface]: carrier: link connected
Dec 11 09:08:13 compute-0 kernel: vlan23: entered promiscuous mode
Dec 11 09:08:13 compute-0 NetworkManager[49003]: <info>  [1765444093.1298] device (br-ex)[Open vSwitch Interface]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Dec 11 09:08:13 compute-0 NetworkManager[49003]: <info>  [1765444093.1319] device (vlan22)[Open vSwitch Interface]: carrier: link connected
Dec 11 09:08:13 compute-0 NetworkManager[49003]: <info>  [1765444093.1330] device (vlan22)[Open vSwitch Interface]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Dec 11 09:08:13 compute-0 NetworkManager[49003]: <info>  [1765444093.1337] device (br-ex)[Open vSwitch Interface]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Dec 11 09:08:13 compute-0 NetworkManager[49003]: <info>  [1765444093.1339] device (br-ex)[Open vSwitch Interface]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Dec 11 09:08:13 compute-0 NetworkManager[49003]: <info>  [1765444093.1343] device (br-ex)[Open vSwitch Interface]: Activation: successful, device activated.
Dec 11 09:08:13 compute-0 NetworkManager[49003]: <info>  [1765444093.1350] device (vlan22)[Open vSwitch Interface]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Dec 11 09:08:13 compute-0 NetworkManager[49003]: <info>  [1765444093.1353] device (vlan22)[Open vSwitch Interface]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Dec 11 09:08:13 compute-0 NetworkManager[49003]: <info>  [1765444093.1358] device (vlan22)[Open vSwitch Interface]: Activation: successful, device activated.
Dec 11 09:08:13 compute-0 kernel: vlan21: entered promiscuous mode
Dec 11 09:08:13 compute-0 NetworkManager[49003]: <info>  [1765444093.1405] device (vlan23)[Open vSwitch Interface]: carrier: link connected
Dec 11 09:08:13 compute-0 NetworkManager[49003]: <info>  [1765444093.1423] device (vlan23)[Open vSwitch Interface]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Dec 11 09:08:13 compute-0 kernel: vlan20: entered promiscuous mode
Dec 11 09:08:13 compute-0 NetworkManager[49003]: <info>  [1765444093.1466] device (vlan23)[Open vSwitch Interface]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Dec 11 09:08:13 compute-0 NetworkManager[49003]: <info>  [1765444093.1467] device (vlan23)[Open vSwitch Interface]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Dec 11 09:08:13 compute-0 NetworkManager[49003]: <info>  [1765444093.1471] device (vlan23)[Open vSwitch Interface]: Activation: successful, device activated.
Dec 11 09:08:13 compute-0 NetworkManager[49003]: <info>  [1765444093.1526] device (vlan21)[Open vSwitch Interface]: carrier: link connected
Dec 11 09:08:13 compute-0 NetworkManager[49003]: <info>  [1765444093.1539] device (vlan21)[Open vSwitch Interface]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Dec 11 09:08:13 compute-0 NetworkManager[49003]: <info>  [1765444093.1557] device (vlan20)[Open vSwitch Interface]: carrier: link connected
Dec 11 09:08:13 compute-0 NetworkManager[49003]: <info>  [1765444093.1567] device (vlan20)[Open vSwitch Interface]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Dec 11 09:08:13 compute-0 NetworkManager[49003]: <info>  [1765444093.1797] device (vlan21)[Open vSwitch Interface]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Dec 11 09:08:13 compute-0 NetworkManager[49003]: <info>  [1765444093.1798] device (vlan20)[Open vSwitch Interface]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Dec 11 09:08:13 compute-0 NetworkManager[49003]: <info>  [1765444093.1804] device (vlan21)[Open vSwitch Interface]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Dec 11 09:08:13 compute-0 NetworkManager[49003]: <info>  [1765444093.1808] device (vlan21)[Open vSwitch Interface]: Activation: successful, device activated.
Dec 11 09:08:13 compute-0 NetworkManager[49003]: <info>  [1765444093.1816] device (vlan20)[Open vSwitch Interface]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Dec 11 09:08:13 compute-0 NetworkManager[49003]: <info>  [1765444093.1821] device (vlan20)[Open vSwitch Interface]: Activation: successful, device activated.
Dec 11 09:08:14 compute-0 NetworkManager[49003]: <info>  [1765444094.2727] audit: op="checkpoint-adjust-rollback-timeout" arg="/org/freedesktop/NetworkManager/Checkpoint/1" pid=51776 uid=0 result="success"
Dec 11 09:08:14 compute-0 NetworkManager[49003]: <info>  [1765444094.4562] checkpoint[0x558581087950]: destroy /org/freedesktop/NetworkManager/Checkpoint/1
Dec 11 09:08:14 compute-0 NetworkManager[49003]: <info>  [1765444094.4565] audit: op="checkpoint-destroy" arg="/org/freedesktop/NetworkManager/Checkpoint/1" pid=51776 uid=0 result="success"
Dec 11 09:08:14 compute-0 sudo[52132]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dqncwkzypjorpuxzwulsdczpqwcgybvz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765444094.0530996-845-243344798269737/AnsiballZ_async_status.py'
Dec 11 09:08:14 compute-0 sudo[52132]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 11 09:08:14 compute-0 python3.9[52134]: ansible-ansible.legacy.async_status Invoked with jid=j700098005729.51770 mode=status _async_dir=/root/.ansible_async
Dec 11 09:08:14 compute-0 sudo[52132]: pam_unix(sudo:session): session closed for user root
Dec 11 09:08:14 compute-0 NetworkManager[49003]: <info>  [1765444094.7647] audit: op="checkpoint-create" arg="/org/freedesktop/NetworkManager/Checkpoint/2" pid=51776 uid=0 result="success"
Dec 11 09:08:14 compute-0 NetworkManager[49003]: <info>  [1765444094.7664] audit: op="checkpoint-adjust-rollback-timeout" arg="/org/freedesktop/NetworkManager/Checkpoint/2" pid=51776 uid=0 result="success"
Dec 11 09:08:15 compute-0 NetworkManager[49003]: <info>  [1765444095.0119] audit: op="networking-control" arg="global-dns-configuration" pid=51776 uid=0 result="success"
Dec 11 09:08:15 compute-0 NetworkManager[49003]: <info>  [1765444095.0152] config: signal: SET_VALUES,values,values-intern,global-dns-config (/etc/NetworkManager/NetworkManager.conf, /run/NetworkManager/conf.d/15-carrier-timeout.conf)
Dec 11 09:08:15 compute-0 NetworkManager[49003]: <info>  [1765444095.0185] audit: op="networking-control" arg="global-dns-configuration" pid=51776 uid=0 result="success"
Dec 11 09:08:15 compute-0 NetworkManager[49003]: <info>  [1765444095.0212] audit: op="checkpoint-adjust-rollback-timeout" arg="/org/freedesktop/NetworkManager/Checkpoint/2" pid=51776 uid=0 result="success"
Dec 11 09:08:15 compute-0 NetworkManager[49003]: <info>  [1765444095.1706] checkpoint[0x558581087a20]: destroy /org/freedesktop/NetworkManager/Checkpoint/2
Dec 11 09:08:15 compute-0 NetworkManager[49003]: <info>  [1765444095.1710] audit: op="checkpoint-destroy" arg="/org/freedesktop/NetworkManager/Checkpoint/2" pid=51776 uid=0 result="success"
Dec 11 09:08:15 compute-0 ansible-async_wrapper.py[51774]: Module complete (51774)
Dec 11 09:08:15 compute-0 ansible-async_wrapper.py[51773]: Done in kid B.
Dec 11 09:08:17 compute-0 sudo[52237]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ypontmxopvzsptrjscwgfktdbrezcnsu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765444094.0530996-845-243344798269737/AnsiballZ_async_status.py'
Dec 11 09:08:17 compute-0 sudo[52237]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 11 09:08:18 compute-0 python3.9[52239]: ansible-ansible.legacy.async_status Invoked with jid=j700098005729.51770 mode=status _async_dir=/root/.ansible_async
Dec 11 09:08:18 compute-0 sudo[52237]: pam_unix(sudo:session): session closed for user root
Dec 11 09:08:18 compute-0 sudo[52337]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mmozbtdycffmmycfsrmwgyvoctgzyrvv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765444094.0530996-845-243344798269737/AnsiballZ_async_status.py'
Dec 11 09:08:18 compute-0 sudo[52337]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 11 09:08:18 compute-0 python3.9[52339]: ansible-ansible.legacy.async_status Invoked with jid=j700098005729.51770 mode=cleanup _async_dir=/root/.ansible_async
Dec 11 09:08:18 compute-0 sudo[52337]: pam_unix(sudo:session): session closed for user root
Dec 11 09:08:19 compute-0 sudo[52489]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-daadsdszjjbzruuldhkxucwyhbjihjei ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765444098.8677282-926-271678203288871/AnsiballZ_stat.py'
Dec 11 09:08:19 compute-0 sudo[52489]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 11 09:08:19 compute-0 python3.9[52491]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/os-net-config.returncode follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 11 09:08:19 compute-0 sudo[52489]: pam_unix(sudo:session): session closed for user root
Dec 11 09:08:19 compute-0 sudo[52612]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lsbbxzfpdubyxwrbpyylibycuogtwlhm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765444098.8677282-926-271678203288871/AnsiballZ_copy.py'
Dec 11 09:08:19 compute-0 sudo[52612]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 11 09:08:19 compute-0 python3.9[52614]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/os-net-config.returncode mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1765444098.8677282-926-271678203288871/.source.returncode _original_basename=.xsrb2k4o follow=False checksum=b6589fc6ab0dc82cf12099d1c2d40ab994e8410c backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 11 09:08:19 compute-0 sudo[52612]: pam_unix(sudo:session): session closed for user root
Dec 11 09:08:20 compute-0 sudo[52764]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nsrmmeibtazbtyfbajpfirliipvphbnn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765444100.272809-974-232182290773010/AnsiballZ_stat.py'
Dec 11 09:08:20 compute-0 sudo[52764]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 11 09:08:20 compute-0 python3.9[52766]: ansible-ansible.legacy.stat Invoked with path=/etc/cloud/cloud.cfg.d/99-edpm-disable-network-config.cfg follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 11 09:08:20 compute-0 sudo[52764]: pam_unix(sudo:session): session closed for user root
Dec 11 09:08:20 compute-0 systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Dec 11 09:08:21 compute-0 sudo[52889]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xwnzpiwloeicsfyerqtlhecxcengthic ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765444100.272809-974-232182290773010/AnsiballZ_copy.py'
Dec 11 09:08:21 compute-0 sudo[52889]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 11 09:08:21 compute-0 python3.9[52891]: ansible-ansible.legacy.copy Invoked with dest=/etc/cloud/cloud.cfg.d/99-edpm-disable-network-config.cfg mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1765444100.272809-974-232182290773010/.source.cfg _original_basename=.1a222853 follow=False checksum=f3c5952a9cd4c6c31b314b25eb897168971cc86e backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 11 09:08:21 compute-0 sudo[52889]: pam_unix(sudo:session): session closed for user root
Dec 11 09:08:21 compute-0 sudo[53042]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uavoedgdmjuybfejntxripscocbzntrv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765444101.5213175-1019-116844978264148/AnsiballZ_systemd.py'
Dec 11 09:08:21 compute-0 sudo[53042]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 11 09:08:22 compute-0 python3.9[53044]: ansible-ansible.builtin.systemd Invoked with name=NetworkManager state=reloaded daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Dec 11 09:08:22 compute-0 systemd[1]: Reloading Network Manager...
Dec 11 09:08:22 compute-0 NetworkManager[49003]: <info>  [1765444102.2272] audit: op="reload" arg="0" pid=53048 uid=0 result="success"
Dec 11 09:08:22 compute-0 NetworkManager[49003]: <info>  [1765444102.2281] config: signal: SIGHUP,config-files,values,values-user,no-auto-default (/etc/NetworkManager/NetworkManager.conf, /usr/lib/NetworkManager/conf.d/00-server.conf, /run/NetworkManager/conf.d/15-carrier-timeout.conf, /var/lib/NetworkManager/NetworkManager-intern.conf)
Dec 11 09:08:22 compute-0 systemd[1]: Reloaded Network Manager.
Dec 11 09:08:22 compute-0 sudo[53042]: pam_unix(sudo:session): session closed for user root
Dec 11 09:08:22 compute-0 sshd-session[45003]: Connection closed by 192.168.122.30 port 40244
Dec 11 09:08:22 compute-0 sshd-session[45000]: pam_unix(sshd:session): session closed for user zuul
Dec 11 09:08:22 compute-0 systemd[1]: session-10.scope: Deactivated successfully.
Dec 11 09:08:22 compute-0 systemd[1]: session-10.scope: Consumed 54.856s CPU time.
Dec 11 09:08:22 compute-0 systemd-logind[792]: Session 10 logged out. Waiting for processes to exit.
Dec 11 09:08:22 compute-0 systemd-logind[792]: Removed session 10.
Dec 11 09:08:28 compute-0 sshd-session[53079]: Accepted publickey for zuul from 192.168.122.30 port 59534 ssh2: ECDSA SHA256:cT48FffNzE3FSGRebchaTzw3hIqcCIoBfXY30Q2C9bc
Dec 11 09:08:28 compute-0 systemd-logind[792]: New session 11 of user zuul.
Dec 11 09:08:28 compute-0 systemd[1]: Started Session 11 of User zuul.
Dec 11 09:08:28 compute-0 sshd-session[53079]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Dec 11 09:08:29 compute-0 python3.9[53232]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 11 09:08:30 compute-0 python3.9[53386]: ansible-ansible.builtin.setup Invoked with filter=['ansible_default_ipv4'] gather_subset=['!all', '!min', 'network'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Dec 11 09:08:31 compute-0 python3.9[53580]: ansible-ansible.legacy.command Invoked with _raw_params=hostname -f _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 11 09:08:32 compute-0 systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Dec 11 09:08:32 compute-0 sshd-session[53082]: Connection closed by 192.168.122.30 port 59534
Dec 11 09:08:32 compute-0 sshd-session[53079]: pam_unix(sshd:session): session closed for user zuul
Dec 11 09:08:32 compute-0 systemd[1]: session-11.scope: Deactivated successfully.
Dec 11 09:08:32 compute-0 systemd[1]: session-11.scope: Consumed 2.348s CPU time.
Dec 11 09:08:32 compute-0 systemd-logind[792]: Session 11 logged out. Waiting for processes to exit.
Dec 11 09:08:32 compute-0 systemd-logind[792]: Removed session 11.
Dec 11 09:08:38 compute-0 sshd-session[53610]: Accepted publickey for zuul from 192.168.122.30 port 48242 ssh2: ECDSA SHA256:cT48FffNzE3FSGRebchaTzw3hIqcCIoBfXY30Q2C9bc
Dec 11 09:08:38 compute-0 systemd-logind[792]: New session 12 of user zuul.
Dec 11 09:08:38 compute-0 systemd[1]: Started Session 12 of User zuul.
Dec 11 09:08:38 compute-0 sshd-session[53610]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Dec 11 09:08:39 compute-0 python3.9[53763]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 11 09:08:40 compute-0 python3.9[53917]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 11 09:08:41 compute-0 sudo[54072]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tgojwiuadmxcxjkzqoxqaubscccsxunf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765444120.9128282-80-177188399966141/AnsiballZ_setup.py'
Dec 11 09:08:41 compute-0 sudo[54072]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 11 09:08:41 compute-0 python3.9[54074]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Dec 11 09:08:41 compute-0 sudo[54072]: pam_unix(sudo:session): session closed for user root
Dec 11 09:08:42 compute-0 sudo[54156]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bhufxvgqqzyjngenjddvqajwryufouqr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765444120.9128282-80-177188399966141/AnsiballZ_dnf.py'
Dec 11 09:08:42 compute-0 sudo[54156]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 11 09:08:42 compute-0 python3.9[54158]: ansible-ansible.legacy.dnf Invoked with name=['podman'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Dec 11 09:08:44 compute-0 sudo[54156]: pam_unix(sudo:session): session closed for user root
Dec 11 09:08:44 compute-0 sudo[54310]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-weidelgdmojrmcvopfdtxsptiioyacmb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765444124.2720146-116-122178696300399/AnsiballZ_setup.py'
Dec 11 09:08:44 compute-0 sudo[54310]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 11 09:08:44 compute-0 python3.9[54312]: ansible-ansible.builtin.setup Invoked with filter=['ansible_interfaces'] gather_subset=['!all', '!min', 'network'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Dec 11 09:08:45 compute-0 sudo[54310]: pam_unix(sudo:session): session closed for user root
Dec 11 09:08:45 compute-0 sudo[54505]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lcwmpqbfkfumfeaowovtkcsxxlkqdfly ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765444125.5023265-149-224282421760311/AnsiballZ_file.py'
Dec 11 09:08:45 compute-0 sudo[54505]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 11 09:08:46 compute-0 python3.9[54507]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/containers/networks recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 11 09:08:46 compute-0 sudo[54505]: pam_unix(sudo:session): session closed for user root
Dec 11 09:08:46 compute-0 sudo[54657]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-owgjsuxneyqvjvdaghjewhikhgfaugmb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765444126.3074238-173-137590210674113/AnsiballZ_command.py'
Dec 11 09:08:46 compute-0 sudo[54657]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 11 09:08:46 compute-0 python3.9[54659]: ansible-ansible.legacy.command Invoked with _raw_params=podman network inspect podman
                                             _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 11 09:08:47 compute-0 systemd[1]: var-lib-containers-storage-overlay-compat1583032228-merged.mount: Deactivated successfully.
Dec 11 09:08:47 compute-0 podman[54660]: 2025-12-11 09:08:47.107418128 +0000 UTC m=+0.103636299 system refresh
Dec 11 09:08:47 compute-0 sudo[54657]: pam_unix(sudo:session): session closed for user root
Dec 11 09:08:47 compute-0 sudo[54820]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-whkgclwwyrvvajxlykvbiukvijgrizzj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765444127.4069684-197-273328904938692/AnsiballZ_stat.py'
Dec 11 09:08:47 compute-0 sudo[54820]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 11 09:08:48 compute-0 python3.9[54822]: ansible-ansible.legacy.stat Invoked with path=/etc/containers/networks/podman.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 11 09:08:48 compute-0 sudo[54820]: pam_unix(sudo:session): session closed for user root
Dec 11 09:08:48 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Dec 11 09:08:48 compute-0 sudo[54943]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wprdebkbqiggcrphrmvrvehvrpfqjyzh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765444127.4069684-197-273328904938692/AnsiballZ_copy.py'
Dec 11 09:08:48 compute-0 sudo[54943]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 11 09:08:48 compute-0 python3.9[54945]: ansible-ansible.legacy.copy Invoked with dest=/etc/containers/networks/podman.json group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1765444127.4069684-197-273328904938692/.source.json follow=False _original_basename=podman_network_config.j2 checksum=a30fa3c1ec0b27b68ea6e5c57f19447b770b0413 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 11 09:08:48 compute-0 sudo[54943]: pam_unix(sudo:session): session closed for user root
Dec 11 09:08:49 compute-0 sudo[55095]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mcubrqbaqedfooscrrlqjvyrcgnileib ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765444128.9337847-242-100781375758684/AnsiballZ_stat.py'
Dec 11 09:08:49 compute-0 sudo[55095]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 11 09:08:49 compute-0 python3.9[55097]: ansible-ansible.legacy.stat Invoked with path=/etc/containers/registries.conf.d/20-edpm-podman-registries.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 11 09:08:49 compute-0 sudo[55095]: pam_unix(sudo:session): session closed for user root
Dec 11 09:08:49 compute-0 sudo[55218]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uxsdbbjebkwwnqirkburtieadskkhfvn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765444128.9337847-242-100781375758684/AnsiballZ_copy.py'
Dec 11 09:08:49 compute-0 sudo[55218]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 11 09:08:50 compute-0 python3.9[55220]: ansible-ansible.legacy.copy Invoked with dest=/etc/containers/registries.conf.d/20-edpm-podman-registries.conf group=root mode=0644 owner=root setype=etc_t src=/home/zuul/.ansible/tmp/ansible-tmp-1765444128.9337847-242-100781375758684/.source.conf follow=False _original_basename=registries.conf.j2 checksum=aa15b84f7f4c5c1e005ae51043980351730af2c4 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Dec 11 09:08:50 compute-0 sudo[55218]: pam_unix(sudo:session): session closed for user root
Dec 11 09:08:50 compute-0 sudo[55370]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wacteeuztddgkwvopvzznpdvqojvbelq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765444130.3300946-290-277019122189263/AnsiballZ_ini_file.py'
Dec 11 09:08:50 compute-0 sudo[55370]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 11 09:08:51 compute-0 python3.9[55372]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=pids_limit owner=root path=/etc/containers/containers.conf section=containers setype=etc_t value=4096 backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Dec 11 09:08:51 compute-0 sudo[55370]: pam_unix(sudo:session): session closed for user root
Dec 11 09:08:51 compute-0 sudo[55522]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gmlhdvcasvzudulsljzdyvephujjkpoe ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765444131.2111456-290-77360258149545/AnsiballZ_ini_file.py'
Dec 11 09:08:51 compute-0 sudo[55522]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 11 09:08:51 compute-0 python3.9[55524]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=events_logger owner=root path=/etc/containers/containers.conf section=engine setype=etc_t value="journald" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Dec 11 09:08:51 compute-0 sudo[55522]: pam_unix(sudo:session): session closed for user root
Dec 11 09:08:52 compute-0 sudo[55674]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qbfdidnmnduqsegtbyrlgrxoaolzhtmw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765444131.9087496-290-143059147463255/AnsiballZ_ini_file.py'
Dec 11 09:08:52 compute-0 sudo[55674]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 11 09:08:52 compute-0 python3.9[55676]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=runtime owner=root path=/etc/containers/containers.conf section=engine setype=etc_t value="crun" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Dec 11 09:08:52 compute-0 sudo[55674]: pam_unix(sudo:session): session closed for user root
Dec 11 09:08:52 compute-0 sudo[55826]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tctyetagkjylhwxuoiqvcytxnidrdumr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765444132.5768912-290-145399148329056/AnsiballZ_ini_file.py'
Dec 11 09:08:52 compute-0 sudo[55826]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 11 09:08:53 compute-0 python3.9[55828]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=network_backend owner=root path=/etc/containers/containers.conf section=network setype=etc_t value="netavark" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Dec 11 09:08:53 compute-0 sudo[55826]: pam_unix(sudo:session): session closed for user root
Dec 11 09:08:53 compute-0 sudo[55978]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jkvbuxemyyagtmquebfeeaayquqkfklx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765444133.3078241-383-195898479141793/AnsiballZ_dnf.py'
Dec 11 09:08:53 compute-0 sudo[55978]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 11 09:08:53 compute-0 python3.9[55980]: ansible-ansible.legacy.dnf Invoked with name=['openssh-server'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Dec 11 09:08:55 compute-0 sudo[55978]: pam_unix(sudo:session): session closed for user root
Dec 11 09:08:56 compute-0 sudo[56131]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fljpsffwmiqgoqngzgyenxlkwxsxtltf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765444135.8677197-416-206211810190964/AnsiballZ_setup.py'
Dec 11 09:08:56 compute-0 sudo[56131]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 11 09:08:56 compute-0 python3.9[56133]: ansible-setup Invoked with gather_subset=['!all', '!min', 'distribution', 'distribution_major_version', 'distribution_version', 'os_family'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 11 09:08:56 compute-0 sudo[56131]: pam_unix(sudo:session): session closed for user root
Dec 11 09:08:56 compute-0 sudo[56285]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sixjvienzbgfuvupjdnombwumbcfjbfw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765444136.631016-440-86432209243275/AnsiballZ_stat.py'
Dec 11 09:08:56 compute-0 sudo[56285]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 11 09:08:57 compute-0 python3.9[56287]: ansible-stat Invoked with path=/run/ostree-booted follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 11 09:08:57 compute-0 sudo[56285]: pam_unix(sudo:session): session closed for user root
Dec 11 09:08:57 compute-0 sudo[56437]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xwypqxtrozilvrabuqmlbafcksninmrs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765444137.320883-467-81713350612804/AnsiballZ_stat.py'
Dec 11 09:08:57 compute-0 sudo[56437]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 11 09:08:57 compute-0 python3.9[56439]: ansible-stat Invoked with path=/sbin/transactional-update follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 11 09:08:57 compute-0 sudo[56437]: pam_unix(sudo:session): session closed for user root
Dec 11 09:08:58 compute-0 sudo[56589]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ujhtixdglezizbcixjimrbhdibjzwjup ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765444138.2209406-497-9366189802804/AnsiballZ_command.py'
Dec 11 09:08:58 compute-0 sudo[56589]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 11 09:08:58 compute-0 python3.9[56591]: ansible-ansible.legacy.command Invoked with _raw_params=systemctl is-system-running _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 11 09:08:58 compute-0 sudo[56589]: pam_unix(sudo:session): session closed for user root
Dec 11 09:08:59 compute-0 sudo[56742]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-avlvypuqslenfwvxoaxokotiwgnppyvf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765444139.0560935-527-63395155889979/AnsiballZ_service_facts.py'
Dec 11 09:08:59 compute-0 sudo[56742]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 11 09:08:59 compute-0 python3.9[56744]: ansible-service_facts Invoked
Dec 11 09:08:59 compute-0 network[56761]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Dec 11 09:08:59 compute-0 network[56762]: 'network-scripts' will be removed from distribution in near future.
Dec 11 09:08:59 compute-0 network[56763]: It is advised to switch to 'NetworkManager' instead for network management.
Dec 11 09:09:03 compute-0 sudo[56742]: pam_unix(sudo:session): session closed for user root
Dec 11 09:09:04 compute-0 sudo[57046]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-onittryzyyqgdsldpbmqaonuacijnqhy ; /bin/bash /home/zuul/.ansible/tmp/ansible-tmp-1765444144.3159888-572-256065122422110/AnsiballZ_timesync_provider.sh /home/zuul/.ansible/tmp/ansible-tmp-1765444144.3159888-572-256065122422110/args'
Dec 11 09:09:04 compute-0 sudo[57046]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 11 09:09:04 compute-0 sudo[57046]: pam_unix(sudo:session): session closed for user root
Dec 11 09:09:05 compute-0 sudo[57213]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oqkbjzhzdzyqzstkpfkpzbzrtdsxworu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765444145.113695-605-46212827720526/AnsiballZ_dnf.py'
Dec 11 09:09:05 compute-0 sudo[57213]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 11 09:09:05 compute-0 python3.9[57215]: ansible-ansible.legacy.dnf Invoked with name=['chrony'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Dec 11 09:09:07 compute-0 sudo[57213]: pam_unix(sudo:session): session closed for user root
Dec 11 09:09:08 compute-0 sudo[57366]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-opbgihmuqsgdbezwmcvuobhegrthkiha ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765444147.9930074-644-108714163369775/AnsiballZ_package_facts.py'
Dec 11 09:09:08 compute-0 sudo[57366]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 11 09:09:08 compute-0 python3.9[57368]: ansible-package_facts Invoked with manager=['auto'] strategy=first
Dec 11 09:09:09 compute-0 sudo[57366]: pam_unix(sudo:session): session closed for user root
Dec 11 09:09:10 compute-0 sudo[57518]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uxzoiugrgykzdqtzntrgqarjrwtuljas ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765444149.8895452-674-32786393089402/AnsiballZ_stat.py'
Dec 11 09:09:10 compute-0 sudo[57518]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 11 09:09:10 compute-0 python3.9[57520]: ansible-ansible.legacy.stat Invoked with path=/etc/chrony.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 11 09:09:10 compute-0 sudo[57518]: pam_unix(sudo:session): session closed for user root
Dec 11 09:09:10 compute-0 sudo[57643]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ojmhwvtlismliduvknwmbslfrjgusead ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765444149.8895452-674-32786393089402/AnsiballZ_copy.py'
Dec 11 09:09:10 compute-0 sudo[57643]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 11 09:09:11 compute-0 python3.9[57645]: ansible-ansible.legacy.copy Invoked with backup=True dest=/etc/chrony.conf mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1765444149.8895452-674-32786393089402/.source.conf follow=False _original_basename=chrony.conf.j2 checksum=cfb003e56d02d0d2c65555452eb1a05073fecdad force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 11 09:09:11 compute-0 sudo[57643]: pam_unix(sudo:session): session closed for user root
Dec 11 09:09:11 compute-0 sudo[57797]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hrnpacmynzdxhziryscneletavtighbj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765444151.2914073-719-143019300034795/AnsiballZ_stat.py'
Dec 11 09:09:11 compute-0 sudo[57797]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 11 09:09:11 compute-0 python3.9[57799]: ansible-ansible.legacy.stat Invoked with path=/etc/sysconfig/chronyd follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 11 09:09:11 compute-0 sudo[57797]: pam_unix(sudo:session): session closed for user root
Dec 11 09:09:12 compute-0 sudo[57922]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dwmmpjcbvnvuyxucmwtngvvqleueuxmk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765444151.2914073-719-143019300034795/AnsiballZ_copy.py'
Dec 11 09:09:12 compute-0 sudo[57922]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 11 09:09:12 compute-0 python3.9[57924]: ansible-ansible.legacy.copy Invoked with backup=True dest=/etc/sysconfig/chronyd mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1765444151.2914073-719-143019300034795/.source follow=False _original_basename=chronyd.sysconfig.j2 checksum=dd196b1ff1f915b23eebc37ec77405b5dd3df76c force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 11 09:09:12 compute-0 sudo[57922]: pam_unix(sudo:session): session closed for user root
Dec 11 09:09:13 compute-0 sudo[58076]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nxqsmxcgrnfziqxpktiyhwkrzlzkwvbk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765444153.453166-782-72049912611056/AnsiballZ_lineinfile.py'
Dec 11 09:09:13 compute-0 sudo[58076]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 11 09:09:14 compute-0 python3.9[58078]: ansible-lineinfile Invoked with backup=True create=True dest=/etc/sysconfig/network line=PEERNTP=no mode=0644 regexp=^PEERNTP= state=present path=/etc/sysconfig/network encoding=utf-8 backrefs=False firstmatch=False unsafe_writes=False search_string=None insertafter=None insertbefore=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 11 09:09:14 compute-0 sudo[58076]: pam_unix(sudo:session): session closed for user root
Dec 11 09:09:15 compute-0 sudo[58230]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yknocbczexcgtfuzinnkeeyrnjsqfosr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765444155.1836035-827-116921515235283/AnsiballZ_setup.py'
Dec 11 09:09:15 compute-0 sudo[58230]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 11 09:09:15 compute-0 python3.9[58232]: ansible-ansible.legacy.setup Invoked with gather_subset=['!all'] filter=['ansible_service_mgr'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Dec 11 09:09:16 compute-0 sudo[58230]: pam_unix(sudo:session): session closed for user root
Dec 11 09:09:16 compute-0 sudo[58314]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gnflmdybmkvjowpznavcqjnoewpbwkum ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765444155.1836035-827-116921515235283/AnsiballZ_systemd.py'
Dec 11 09:09:16 compute-0 sudo[58314]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 11 09:09:17 compute-0 python3.9[58316]: ansible-ansible.legacy.systemd Invoked with enabled=True name=chronyd state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 11 09:09:17 compute-0 sudo[58314]: pam_unix(sudo:session): session closed for user root
Dec 11 09:09:18 compute-0 sudo[58468]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jcvwiyxapmlhtosntmaoopmkafhikdny ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765444158.0030477-875-169127868122668/AnsiballZ_setup.py'
Dec 11 09:09:18 compute-0 sudo[58468]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 11 09:09:18 compute-0 python3.9[58470]: ansible-ansible.legacy.setup Invoked with gather_subset=['!all'] filter=['ansible_service_mgr'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Dec 11 09:09:18 compute-0 sudo[58468]: pam_unix(sudo:session): session closed for user root
Dec 11 09:09:19 compute-0 sudo[58552]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qafubjxwuybasbaspfeqpplitlzkqkgo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765444158.0030477-875-169127868122668/AnsiballZ_systemd.py'
Dec 11 09:09:19 compute-0 sudo[58552]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 11 09:09:19 compute-0 python3.9[58554]: ansible-ansible.legacy.systemd Invoked with name=chronyd state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Dec 11 09:09:19 compute-0 chronyd[794]: chronyd exiting
Dec 11 09:09:19 compute-0 systemd[1]: Stopping NTP client/server...
Dec 11 09:09:19 compute-0 systemd[1]: chronyd.service: Deactivated successfully.
Dec 11 09:09:19 compute-0 systemd[1]: Stopped NTP client/server.
Dec 11 09:09:19 compute-0 systemd[1]: Starting NTP client/server...
Dec 11 09:09:19 compute-0 chronyd[58563]: chronyd version 4.8 starting (+CMDMON +REFCLOCK +RTC +PRIVDROP +SCFILTER +SIGND +NTS +SECHASH +IPV6 +DEBUG)
Dec 11 09:09:19 compute-0 chronyd[58563]: Frequency -31.501 +/- 2.297 ppm read from /var/lib/chrony/drift
Dec 11 09:09:19 compute-0 chronyd[58563]: Loaded seccomp filter (level 2)
Dec 11 09:09:19 compute-0 systemd[1]: Started NTP client/server.
Dec 11 09:09:19 compute-0 sudo[58552]: pam_unix(sudo:session): session closed for user root
Dec 11 09:09:20 compute-0 sshd-session[53613]: Connection closed by 192.168.122.30 port 48242
Dec 11 09:09:20 compute-0 sshd-session[53610]: pam_unix(sshd:session): session closed for user zuul
Dec 11 09:09:20 compute-0 systemd-logind[792]: Session 12 logged out. Waiting for processes to exit.
Dec 11 09:09:20 compute-0 systemd[1]: session-12.scope: Deactivated successfully.
Dec 11 09:09:20 compute-0 systemd[1]: session-12.scope: Consumed 27.610s CPU time.
Dec 11 09:09:20 compute-0 systemd-logind[792]: Removed session 12.
Dec 11 09:09:26 compute-0 sshd-session[58589]: Accepted publickey for zuul from 192.168.122.30 port 41788 ssh2: ECDSA SHA256:cT48FffNzE3FSGRebchaTzw3hIqcCIoBfXY30Q2C9bc
Dec 11 09:09:26 compute-0 systemd-logind[792]: New session 13 of user zuul.
Dec 11 09:09:26 compute-0 systemd[1]: Started Session 13 of User zuul.
Dec 11 09:09:26 compute-0 sshd-session[58589]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Dec 11 09:09:27 compute-0 sudo[58742]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mkjtivzwcpovjffpxvpmqebrbrrmfosk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765444166.8752625-26-266954678425714/AnsiballZ_file.py'
Dec 11 09:09:27 compute-0 sudo[58742]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 11 09:09:27 compute-0 python3.9[58744]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 11 09:09:27 compute-0 sudo[58742]: pam_unix(sudo:session): session closed for user root
Dec 11 09:09:28 compute-0 sudo[58894]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-iworoikrnxfjapnyshreyjhorelorjln ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765444167.8186378-62-237038042053272/AnsiballZ_stat.py'
Dec 11 09:09:28 compute-0 sudo[58894]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 11 09:09:28 compute-0 python3.9[58896]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/ceph-networks.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 11 09:09:28 compute-0 sudo[58894]: pam_unix(sudo:session): session closed for user root
Dec 11 09:09:28 compute-0 sudo[59017]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-micrapkjsnfqsjhxlnoaunvqjvwvmhdn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765444167.8186378-62-237038042053272/AnsiballZ_copy.py'
Dec 11 09:09:28 compute-0 sudo[59017]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 11 09:09:29 compute-0 python3.9[59019]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/firewall/ceph-networks.yaml mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1765444167.8186378-62-237038042053272/.source.yaml follow=False _original_basename=firewall.yaml.j2 checksum=729ea8396013e3343245d6e934e0dcef55029ad2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 11 09:09:29 compute-0 sudo[59017]: pam_unix(sudo:session): session closed for user root
Dec 11 09:09:29 compute-0 sshd-session[58592]: Connection closed by 192.168.122.30 port 41788
Dec 11 09:09:29 compute-0 sshd-session[58589]: pam_unix(sshd:session): session closed for user zuul
Dec 11 09:09:29 compute-0 systemd-logind[792]: Session 13 logged out. Waiting for processes to exit.
Dec 11 09:09:29 compute-0 systemd[1]: session-13.scope: Deactivated successfully.
Dec 11 09:09:29 compute-0 systemd[1]: session-13.scope: Consumed 1.649s CPU time.
Dec 11 09:09:29 compute-0 systemd-logind[792]: Removed session 13.
Dec 11 09:09:35 compute-0 sshd-session[59044]: Accepted publickey for zuul from 192.168.122.30 port 37026 ssh2: ECDSA SHA256:cT48FffNzE3FSGRebchaTzw3hIqcCIoBfXY30Q2C9bc
Dec 11 09:09:35 compute-0 systemd-logind[792]: New session 14 of user zuul.
Dec 11 09:09:35 compute-0 systemd[1]: Started Session 14 of User zuul.
Dec 11 09:09:35 compute-0 sshd-session[59044]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Dec 11 09:09:36 compute-0 python3.9[59197]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 11 09:09:38 compute-0 sudo[59351]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zayoeprzsloaheokjmifcquhuugivssk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765444177.4720194-59-54324348788672/AnsiballZ_file.py'
Dec 11 09:09:38 compute-0 sudo[59351]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 11 09:09:38 compute-0 python3.9[59353]: ansible-ansible.builtin.file Invoked with group=zuul mode=0770 owner=zuul path=/root/.config/containers recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 11 09:09:38 compute-0 sudo[59351]: pam_unix(sudo:session): session closed for user root
Dec 11 09:09:39 compute-0 sudo[59526]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hynioxtkeuabebqvkrgxezyqvdqbcnqy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765444178.549632-83-118511062972439/AnsiballZ_stat.py'
Dec 11 09:09:39 compute-0 sudo[59526]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 11 09:09:39 compute-0 python3.9[59528]: ansible-ansible.legacy.stat Invoked with path=/root/.config/containers/auth.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 11 09:09:39 compute-0 sudo[59526]: pam_unix(sudo:session): session closed for user root
Dec 11 09:09:39 compute-0 sudo[59649]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wsinzhsssnmgiirkajkdrmdgoskadzbw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765444178.549632-83-118511062972439/AnsiballZ_copy.py'
Dec 11 09:09:39 compute-0 sudo[59649]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 11 09:09:39 compute-0 python3.9[59651]: ansible-ansible.legacy.copy Invoked with dest=/root/.config/containers/auth.json group=zuul mode=0660 owner=zuul src=/home/zuul/.ansible/tmp/ansible-tmp-1765444178.549632-83-118511062972439/.source.json _original_basename=.desl8nfn follow=False checksum=bf21a9e8fbc5a3846fb05b4fa0859e0917b2202f backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 11 09:09:39 compute-0 sudo[59649]: pam_unix(sudo:session): session closed for user root
Dec 11 09:09:40 compute-0 sudo[59801]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-iedfxiecxgwtfqqltsndapqirjiovfpj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765444180.4617803-152-164479357369167/AnsiballZ_stat.py'
Dec 11 09:09:40 compute-0 sudo[59801]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 11 09:09:40 compute-0 python3.9[59803]: ansible-ansible.legacy.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 11 09:09:40 compute-0 sudo[59801]: pam_unix(sudo:session): session closed for user root
Dec 11 09:09:41 compute-0 sudo[59924]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qgjxogcrbjmumyhkfdtfwuprzzwnygok ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765444180.4617803-152-164479357369167/AnsiballZ_copy.py'
Dec 11 09:09:41 compute-0 sudo[59924]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 11 09:09:41 compute-0 python3.9[59926]: ansible-ansible.legacy.copy Invoked with dest=/etc/sysconfig/podman_drop_in mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1765444180.4617803-152-164479357369167/.source _original_basename=.9r6r6fww follow=False checksum=125299ce8dea7711a76292961206447f0043248b backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 11 09:09:41 compute-0 sudo[59924]: pam_unix(sudo:session): session closed for user root
Dec 11 09:09:42 compute-0 sudo[60076]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ixvuxizmdpsrbghwjzdwzreaazagrjlb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765444181.790067-200-67330946495890/AnsiballZ_file.py'
Dec 11 09:09:42 compute-0 sudo[60076]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 11 09:09:42 compute-0 python3.9[60078]: ansible-ansible.builtin.file Invoked with path=/var/local/libexec recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec 11 09:09:42 compute-0 sudo[60076]: pam_unix(sudo:session): session closed for user root
Dec 11 09:09:42 compute-0 sudo[60228]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mhzhjbixnnjphbakmzmiyfyardtkjugr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765444182.4542935-224-210330505059789/AnsiballZ_stat.py'
Dec 11 09:09:42 compute-0 sudo[60228]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 11 09:09:42 compute-0 python3.9[60230]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-container-shutdown follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 11 09:09:42 compute-0 sudo[60228]: pam_unix(sudo:session): session closed for user root
Dec 11 09:09:43 compute-0 sudo[60351]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fbuaexgeyduifgkwtjlzadzxnsnexajp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765444182.4542935-224-210330505059789/AnsiballZ_copy.py'
Dec 11 09:09:43 compute-0 sudo[60351]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 11 09:09:43 compute-0 python3.9[60353]: ansible-ansible.legacy.copy Invoked with dest=/var/local/libexec/edpm-container-shutdown group=root mode=0700 owner=root setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1765444182.4542935-224-210330505059789/.source _original_basename=edpm-container-shutdown follow=False checksum=632c3792eb3dce4288b33ae7b265b71950d69f13 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Dec 11 09:09:43 compute-0 sudo[60351]: pam_unix(sudo:session): session closed for user root
Dec 11 09:09:43 compute-0 sudo[60503]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ovumvasfmofolhmgcebkvbiygrmjpaxe ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765444183.627073-224-26105922539915/AnsiballZ_stat.py'
Dec 11 09:09:43 compute-0 sudo[60503]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 11 09:09:44 compute-0 python3.9[60505]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-start-podman-container follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 11 09:09:44 compute-0 sudo[60503]: pam_unix(sudo:session): session closed for user root
Dec 11 09:09:44 compute-0 sudo[60626]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-srgmjjgnrsctntvuetkuudpkzqcgcwdu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765444183.627073-224-26105922539915/AnsiballZ_copy.py'
Dec 11 09:09:44 compute-0 sudo[60626]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 11 09:09:44 compute-0 python3.9[60628]: ansible-ansible.legacy.copy Invoked with dest=/var/local/libexec/edpm-start-podman-container group=root mode=0700 owner=root setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1765444183.627073-224-26105922539915/.source _original_basename=edpm-start-podman-container follow=False checksum=b963c569d75a655c0ccae95d9bb4a2a9a4df27d1 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Dec 11 09:09:44 compute-0 sudo[60626]: pam_unix(sudo:session): session closed for user root
Dec 11 09:09:45 compute-0 sudo[60778]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yazpjllbvfklmwkfkwlmosrzbixnwamt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765444184.8224857-311-150259625366874/AnsiballZ_file.py'
Dec 11 09:09:45 compute-0 sudo[60778]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 11 09:09:45 compute-0 python3.9[60780]: ansible-ansible.builtin.file Invoked with mode=420 path=/etc/systemd/system-preset state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 11 09:09:45 compute-0 sudo[60778]: pam_unix(sudo:session): session closed for user root
Dec 11 09:09:45 compute-0 sudo[60930]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kzoefxpppqrwndrjtiknwxftusoejdvt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765444185.4929533-335-184203763387939/AnsiballZ_stat.py'
Dec 11 09:09:45 compute-0 sudo[60930]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 11 09:09:45 compute-0 python3.9[60932]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm-container-shutdown.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 11 09:09:45 compute-0 sudo[60930]: pam_unix(sudo:session): session closed for user root
Dec 11 09:09:46 compute-0 sudo[61053]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sfwchznyzabeuhsgfbwbgvdpauwjheuw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765444185.4929533-335-184203763387939/AnsiballZ_copy.py'
Dec 11 09:09:46 compute-0 sudo[61053]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 11 09:09:46 compute-0 python3.9[61055]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/edpm-container-shutdown.service group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1765444185.4929533-335-184203763387939/.source.service _original_basename=edpm-container-shutdown-service follow=False checksum=6336835cb0f888670cc99de31e19c8c071444d33 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 11 09:09:46 compute-0 sudo[61053]: pam_unix(sudo:session): session closed for user root
Dec 11 09:09:47 compute-0 sudo[61205]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fnptwooujzzzqnwkhvxpiixyvgqbifjp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765444186.804324-380-262711649950385/AnsiballZ_stat.py'
Dec 11 09:09:47 compute-0 sudo[61205]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 11 09:09:47 compute-0 python3.9[61207]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 11 09:09:47 compute-0 sudo[61205]: pam_unix(sudo:session): session closed for user root
Dec 11 09:09:47 compute-0 sudo[61328]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rmvzchquytovrsyxkzchwqltoqmxoeqd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765444186.804324-380-262711649950385/AnsiballZ_copy.py'
Dec 11 09:09:47 compute-0 sudo[61328]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 11 09:09:47 compute-0 python3.9[61330]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system-preset/91-edpm-container-shutdown.preset group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1765444186.804324-380-262711649950385/.source.preset _original_basename=91-edpm-container-shutdown-preset follow=False checksum=b275e4375287528cb63464dd32f622c4f142a915 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 11 09:09:47 compute-0 sudo[61328]: pam_unix(sudo:session): session closed for user root
Dec 11 09:09:48 compute-0 sudo[61480]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hhsviiatnctkvthjqbnswfbncjbtlufb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765444188.0575633-425-242660560456912/AnsiballZ_systemd.py'
Dec 11 09:09:48 compute-0 sudo[61480]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 11 09:09:48 compute-0 python3.9[61482]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm-container-shutdown state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 11 09:09:48 compute-0 systemd[1]: Reloading.
Dec 11 09:09:49 compute-0 systemd-rc-local-generator[61507]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 11 09:09:49 compute-0 systemd-sysv-generator[61510]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 11 09:09:49 compute-0 systemd[1]: Reloading.
Dec 11 09:09:49 compute-0 systemd-rc-local-generator[61544]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 11 09:09:49 compute-0 systemd-sysv-generator[61548]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 11 09:09:49 compute-0 systemd[1]: Starting EDPM Container Shutdown...
Dec 11 09:09:49 compute-0 systemd[1]: Finished EDPM Container Shutdown.
Dec 11 09:09:49 compute-0 sudo[61480]: pam_unix(sudo:session): session closed for user root
Dec 11 09:09:49 compute-0 sudo[61706]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kudxiqpqvyrxbliyeifrsefwipomgfif ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765444189.654631-449-20362214805888/AnsiballZ_stat.py'
Dec 11 09:09:49 compute-0 sudo[61706]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 11 09:09:50 compute-0 python3.9[61708]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/netns-placeholder.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 11 09:09:50 compute-0 sudo[61706]: pam_unix(sudo:session): session closed for user root
Dec 11 09:09:50 compute-0 sudo[61829]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wgjfmkjhzqsheohtyzvnviyjjfxginak ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765444189.654631-449-20362214805888/AnsiballZ_copy.py'
Dec 11 09:09:50 compute-0 sudo[61829]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 11 09:09:50 compute-0 python3.9[61831]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/netns-placeholder.service group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1765444189.654631-449-20362214805888/.source.service _original_basename=netns-placeholder-service follow=False checksum=b61b1b5918c20c877b8b226fbf34ff89a082d972 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 11 09:09:50 compute-0 sudo[61829]: pam_unix(sudo:session): session closed for user root
Dec 11 09:09:51 compute-0 sudo[61981]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oxuchfwepgfkzclunrfmkuesnqtjjjsl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765444190.907654-494-83336387347079/AnsiballZ_stat.py'
Dec 11 09:09:51 compute-0 sudo[61981]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 11 09:09:51 compute-0 python3.9[61983]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-netns-placeholder.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 11 09:09:51 compute-0 sudo[61981]: pam_unix(sudo:session): session closed for user root
Dec 11 09:09:51 compute-0 sudo[62104]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ofrrsqynphjeitgxnujjbcnsbheertyn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765444190.907654-494-83336387347079/AnsiballZ_copy.py'
Dec 11 09:09:51 compute-0 sudo[62104]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 11 09:09:51 compute-0 python3.9[62106]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system-preset/91-netns-placeholder.preset group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1765444190.907654-494-83336387347079/.source.preset _original_basename=91-netns-placeholder-preset follow=False checksum=28b7b9aa893525d134a1eeda8a0a48fb25b736b9 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 11 09:09:51 compute-0 sudo[62104]: pam_unix(sudo:session): session closed for user root
Dec 11 09:09:52 compute-0 sudo[62256]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-atuggahiwkkgiipbacryfktjpysyuaib ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765444192.2441008-539-127106278306483/AnsiballZ_systemd.py'
Dec 11 09:09:52 compute-0 sudo[62256]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 11 09:09:53 compute-0 python3.9[62258]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=netns-placeholder state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 11 09:09:53 compute-0 systemd[1]: Reloading.
Dec 11 09:09:53 compute-0 systemd-rc-local-generator[62287]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 11 09:09:53 compute-0 systemd-sysv-generator[62290]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 11 09:09:53 compute-0 systemd[1]: Reloading.
Dec 11 09:09:53 compute-0 systemd-rc-local-generator[62322]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 11 09:09:53 compute-0 systemd-sysv-generator[62326]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 11 09:09:53 compute-0 systemd[1]: Starting Create netns directory...
Dec 11 09:09:53 compute-0 systemd[1]: run-netns-placeholder.mount: Deactivated successfully.
Dec 11 09:09:53 compute-0 systemd[1]: netns-placeholder.service: Deactivated successfully.
Dec 11 09:09:53 compute-0 systemd[1]: Finished Create netns directory.
Dec 11 09:09:53 compute-0 sudo[62256]: pam_unix(sudo:session): session closed for user root
Dec 11 09:09:54 compute-0 python3.9[62484]: ansible-ansible.builtin.service_facts Invoked
Dec 11 09:09:54 compute-0 network[62501]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Dec 11 09:09:54 compute-0 network[62502]: 'network-scripts' will be removed from distribution in near future.
Dec 11 09:09:54 compute-0 network[62503]: It is advised to switch to 'NetworkManager' instead for network management.
Dec 11 09:09:57 compute-0 sudo[62763]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-aydvxirztrbpsydzxkmalggnbtjalhkj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765444197.3385549-587-34347578294772/AnsiballZ_systemd.py'
Dec 11 09:09:57 compute-0 sudo[62763]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 11 09:09:57 compute-0 python3.9[62765]: ansible-ansible.builtin.systemd Invoked with enabled=False name=iptables.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 11 09:09:57 compute-0 systemd[1]: Reloading.
Dec 11 09:09:58 compute-0 systemd-sysv-generator[62797]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 11 09:09:58 compute-0 systemd-rc-local-generator[62793]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 11 09:09:58 compute-0 systemd[1]: Stopping IPv4 firewall with iptables...
Dec 11 09:09:58 compute-0 iptables.init[62805]: iptables: Setting chains to policy ACCEPT: raw mangle filter nat [  OK  ]
Dec 11 09:09:58 compute-0 iptables.init[62805]: iptables: Flushing firewall rules: [  OK  ]
Dec 11 09:09:58 compute-0 systemd[1]: iptables.service: Deactivated successfully.
Dec 11 09:09:58 compute-0 systemd[1]: Stopped IPv4 firewall with iptables.
Dec 11 09:09:58 compute-0 sudo[62763]: pam_unix(sudo:session): session closed for user root
Dec 11 09:09:58 compute-0 sudo[62999]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nfhrkmnmcyezxypkunlpvvuobqlpchee ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765444198.695802-587-55440933802062/AnsiballZ_systemd.py'
Dec 11 09:09:58 compute-0 sudo[62999]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 11 09:09:59 compute-0 python3.9[63001]: ansible-ansible.builtin.systemd Invoked with enabled=False name=ip6tables.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 11 09:09:59 compute-0 sudo[62999]: pam_unix(sudo:session): session closed for user root
Dec 11 09:09:59 compute-0 sudo[63153]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-eitorgsplflpqfcmuthkcwqpfydpbcpi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765444199.7348254-635-147780600486832/AnsiballZ_systemd.py'
Dec 11 09:10:00 compute-0 sudo[63153]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 11 09:10:00 compute-0 python3.9[63155]: ansible-ansible.builtin.systemd Invoked with enabled=True name=nftables state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 11 09:10:00 compute-0 systemd[1]: Reloading.
Dec 11 09:10:00 compute-0 systemd-rc-local-generator[63182]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 11 09:10:00 compute-0 systemd-sysv-generator[63185]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 11 09:10:00 compute-0 systemd[1]: Starting Netfilter Tables...
Dec 11 09:10:00 compute-0 systemd[1]: Finished Netfilter Tables.
Dec 11 09:10:00 compute-0 sudo[63153]: pam_unix(sudo:session): session closed for user root
Dec 11 09:10:01 compute-0 sudo[63344]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tuopbjomklbafltxjqcdjnbskipftlfh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765444200.8370473-659-215444843096089/AnsiballZ_command.py'
Dec 11 09:10:01 compute-0 sudo[63344]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 11 09:10:01 compute-0 python3.9[63346]: ansible-ansible.legacy.command Invoked with _raw_params=nft flush ruleset _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 11 09:10:01 compute-0 sudo[63344]: pam_unix(sudo:session): session closed for user root
Dec 11 09:10:02 compute-0 sudo[63497]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vjbajjojwjzbgprznqoaykglhvfvwpjy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765444202.0418906-701-80058364134219/AnsiballZ_stat.py'
Dec 11 09:10:02 compute-0 sudo[63497]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 11 09:10:02 compute-0 python3.9[63499]: ansible-ansible.legacy.stat Invoked with path=/etc/ssh/sshd_config follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 11 09:10:02 compute-0 sudo[63497]: pam_unix(sudo:session): session closed for user root
Dec 11 09:10:02 compute-0 sudo[63622]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qbcfbhmbswdbeqqpuzkmtowonnqbvfzl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765444202.0418906-701-80058364134219/AnsiballZ_copy.py'
Dec 11 09:10:02 compute-0 sudo[63622]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 11 09:10:03 compute-0 python3.9[63624]: ansible-ansible.legacy.copy Invoked with dest=/etc/ssh/sshd_config mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1765444202.0418906-701-80058364134219/.source validate=/usr/sbin/sshd -T -f %s follow=False _original_basename=sshd_config_block.j2 checksum=6c79f4cb960ad444688fde322eeacb8402e22d79 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 11 09:10:03 compute-0 sudo[63622]: pam_unix(sudo:session): session closed for user root
Dec 11 09:10:03 compute-0 sudo[63775]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qvyddobaegaakydpjlhyxvmarncnmnpf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765444203.3101645-746-187371162080865/AnsiballZ_systemd.py'
Dec 11 09:10:03 compute-0 sudo[63775]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 11 09:10:03 compute-0 python3.9[63777]: ansible-ansible.builtin.systemd Invoked with name=sshd state=reloaded daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Dec 11 09:10:03 compute-0 systemd[1]: Reloading OpenSSH server daemon...
Dec 11 09:10:03 compute-0 sshd[1011]: Received SIGHUP; restarting.
Dec 11 09:10:03 compute-0 systemd[1]: Reloaded OpenSSH server daemon.
Dec 11 09:10:03 compute-0 sshd[1011]: Server listening on 0.0.0.0 port 22.
Dec 11 09:10:03 compute-0 sshd[1011]: Server listening on :: port 22.
Dec 11 09:10:03 compute-0 sudo[63775]: pam_unix(sudo:session): session closed for user root
Dec 11 09:10:04 compute-0 sudo[63931]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pcidcouwrxebjzapwkbvgusazqntauzw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765444204.1953878-770-136630082139451/AnsiballZ_file.py'
Dec 11 09:10:04 compute-0 sudo[63931]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 11 09:10:04 compute-0 python3.9[63933]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 11 09:10:04 compute-0 sudo[63931]: pam_unix(sudo:session): session closed for user root
Dec 11 09:10:05 compute-0 sudo[64083]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-djherapugffvztkvlppyajauxsuitula ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765444204.8332596-794-160254821138209/AnsiballZ_stat.py'
Dec 11 09:10:05 compute-0 sudo[64083]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 11 09:10:05 compute-0 python3.9[64085]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/sshd-networks.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 11 09:10:05 compute-0 sudo[64083]: pam_unix(sudo:session): session closed for user root
Dec 11 09:10:05 compute-0 sudo[64206]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-azwkiqufymlyfxtlqmwimbpbjigbsxks ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765444204.8332596-794-160254821138209/AnsiballZ_copy.py'
Dec 11 09:10:05 compute-0 sudo[64206]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 11 09:10:05 compute-0 python3.9[64208]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/firewall/sshd-networks.yaml group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1765444204.8332596-794-160254821138209/.source.yaml follow=False _original_basename=firewall.yaml.j2 checksum=0bfc8440fd8f39002ab90252479fb794f51b5ae8 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 11 09:10:05 compute-0 sudo[64206]: pam_unix(sudo:session): session closed for user root
Dec 11 09:10:06 compute-0 sudo[64358]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bnkmverufpnjjdkofdzrbbakprbcqbgs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765444206.309522-848-55873278019288/AnsiballZ_timezone.py'
Dec 11 09:10:06 compute-0 sudo[64358]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 11 09:10:06 compute-0 python3.9[64360]: ansible-community.general.timezone Invoked with name=UTC hwclock=None
Dec 11 09:10:06 compute-0 systemd[1]: Starting Time & Date Service...
Dec 11 09:10:07 compute-0 systemd[1]: Started Time & Date Service.
Dec 11 09:10:07 compute-0 sudo[64358]: pam_unix(sudo:session): session closed for user root
Dec 11 09:10:07 compute-0 sudo[64514]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-skthnyteyjurrsguzgtirmfqrivtehhg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765444207.3361223-875-264994757133512/AnsiballZ_file.py'
Dec 11 09:10:07 compute-0 sudo[64514]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 11 09:10:07 compute-0 python3.9[64516]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 11 09:10:07 compute-0 sudo[64514]: pam_unix(sudo:session): session closed for user root
Dec 11 09:10:08 compute-0 sudo[64666]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dvfwjxlzpxjjirynfddyvoxgnxupdidc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765444208.0840333-899-7452496531943/AnsiballZ_stat.py'
Dec 11 09:10:08 compute-0 sudo[64666]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 11 09:10:08 compute-0 python3.9[64668]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 11 09:10:08 compute-0 sudo[64666]: pam_unix(sudo:session): session closed for user root
Dec 11 09:10:08 compute-0 sudo[64789]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rxtniwmkswwskewtlwzqcxyvksbtbmqz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765444208.0840333-899-7452496531943/AnsiballZ_copy.py'
Dec 11 09:10:08 compute-0 sudo[64789]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 11 09:10:09 compute-0 python3.9[64791]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1765444208.0840333-899-7452496531943/.source.yaml follow=False _original_basename=base-rules.yaml.j2 checksum=450456afcafded6d4bdecceec7a02e806eebd8b3 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 11 09:10:09 compute-0 sudo[64789]: pam_unix(sudo:session): session closed for user root
Dec 11 09:10:09 compute-0 sudo[64941]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jwneghfdaybxlylagqhxlhtpsqmewggi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765444209.3930206-944-89021947503207/AnsiballZ_stat.py'
Dec 11 09:10:09 compute-0 sudo[64941]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 11 09:10:09 compute-0 python3.9[64943]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 11 09:10:09 compute-0 sudo[64941]: pam_unix(sudo:session): session closed for user root
Dec 11 09:10:10 compute-0 sudo[65064]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hywwotpgdubybehuhofobgbdheakywtb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765444209.3930206-944-89021947503207/AnsiballZ_copy.py'
Dec 11 09:10:10 compute-0 sudo[65064]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 11 09:10:10 compute-0 python3.9[65066]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1765444209.3930206-944-89021947503207/.source.yaml _original_basename=.akdh0xnw follow=False checksum=97d170e1550eee4afc0af065b78cda302a97674c backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 11 09:10:10 compute-0 sudo[65064]: pam_unix(sudo:session): session closed for user root
Dec 11 09:10:10 compute-0 sudo[65216]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wyvkimwiukikveiprtmwdcijvxzmrgsa ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765444210.6192064-989-189444683782729/AnsiballZ_stat.py'
Dec 11 09:10:10 compute-0 sudo[65216]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 11 09:10:11 compute-0 python3.9[65218]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/iptables.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 11 09:10:11 compute-0 sudo[65216]: pam_unix(sudo:session): session closed for user root
Dec 11 09:10:11 compute-0 sudo[65339]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tihtpojzcszokhooegkxmnxaghyhydoo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765444210.6192064-989-189444683782729/AnsiballZ_copy.py'
Dec 11 09:10:11 compute-0 sudo[65339]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 11 09:10:12 compute-0 python3.9[65341]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/iptables.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1765444210.6192064-989-189444683782729/.source.nft _original_basename=iptables.nft follow=False checksum=3e02df08f1f3ab4a513e94056dbd390e3d38fe30 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 11 09:10:12 compute-0 sudo[65339]: pam_unix(sudo:session): session closed for user root
Dec 11 09:10:12 compute-0 sudo[65491]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rctxjfokvrxzhgnwetdmqcrswsawyxzq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765444212.4690254-1034-278158685294224/AnsiballZ_command.py'
Dec 11 09:10:12 compute-0 sudo[65491]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 11 09:10:12 compute-0 python3.9[65493]: ansible-ansible.legacy.command Invoked with _raw_params=nft -f /etc/nftables/iptables.nft _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 11 09:10:12 compute-0 sudo[65491]: pam_unix(sudo:session): session closed for user root
Dec 11 09:10:13 compute-0 sudo[65644]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qhjumypmoruwdzbawfrusopmmmffvdaa ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765444213.1290975-1058-257675256755036/AnsiballZ_command.py'
Dec 11 09:10:13 compute-0 sudo[65644]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 11 09:10:13 compute-0 python3.9[65646]: ansible-ansible.legacy.command Invoked with _raw_params=nft -j list ruleset _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 11 09:10:13 compute-0 sudo[65644]: pam_unix(sudo:session): session closed for user root
Dec 11 09:10:14 compute-0 sudo[65797]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-euclqabknuczqxsvmnasdclylvnqxaic ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1765444214.0179708-1082-76332211307557/AnsiballZ_edpm_nftables_from_files.py'
Dec 11 09:10:14 compute-0 sudo[65797]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 11 09:10:14 compute-0 python3[65799]: ansible-edpm_nftables_from_files Invoked with src=/var/lib/edpm-config/firewall
Dec 11 09:10:14 compute-0 sudo[65797]: pam_unix(sudo:session): session closed for user root
Dec 11 09:10:15 compute-0 sudo[65949]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vfirgicqcqwxsawvkptiiqtkwfbclijq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765444214.8215635-1106-196926258634813/AnsiballZ_stat.py'
Dec 11 09:10:15 compute-0 sudo[65949]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 11 09:10:15 compute-0 python3.9[65951]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 11 09:10:15 compute-0 sudo[65949]: pam_unix(sudo:session): session closed for user root
Dec 11 09:10:15 compute-0 sudo[66072]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-djegnetzboiasbpekfmacfzseqieolmj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765444214.8215635-1106-196926258634813/AnsiballZ_copy.py'
Dec 11 09:10:15 compute-0 sudo[66072]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 11 09:10:15 compute-0 python3.9[66074]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-jumps.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1765444214.8215635-1106-196926258634813/.source.nft follow=False _original_basename=jump-chain.j2 checksum=4c6f036d2d5808f109acc0880c19aa74ca48c961 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 11 09:10:15 compute-0 sudo[66072]: pam_unix(sudo:session): session closed for user root
Dec 11 09:10:16 compute-0 sudo[66224]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hkezxccyayuiebmnhchfdnupmhdysfqc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765444216.1538904-1151-247439559858974/AnsiballZ_stat.py'
Dec 11 09:10:16 compute-0 sudo[66224]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 11 09:10:16 compute-0 python3.9[66226]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-update-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 11 09:10:16 compute-0 sudo[66224]: pam_unix(sudo:session): session closed for user root
Dec 11 09:10:16 compute-0 sudo[66347]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-moaviifmmvkhwsxmwiiitbayxppvjumt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765444216.1538904-1151-247439559858974/AnsiballZ_copy.py'
Dec 11 09:10:16 compute-0 sudo[66347]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 11 09:10:17 compute-0 python3.9[66349]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-update-jumps.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1765444216.1538904-1151-247439559858974/.source.nft follow=False _original_basename=jump-chain.j2 checksum=4c6f036d2d5808f109acc0880c19aa74ca48c961 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 11 09:10:17 compute-0 sudo[66347]: pam_unix(sudo:session): session closed for user root
Dec 11 09:10:17 compute-0 sudo[66499]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cbmymxpdidtcfvphrsvjiucwoaxxnmob ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765444217.528599-1196-15421493642315/AnsiballZ_stat.py'
Dec 11 09:10:17 compute-0 sudo[66499]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 11 09:10:18 compute-0 python3.9[66501]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-flushes.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 11 09:10:18 compute-0 sudo[66499]: pam_unix(sudo:session): session closed for user root
Dec 11 09:10:18 compute-0 sudo[66622]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nlblcdpfltxthzdosnnqdipqhukssxau ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765444217.528599-1196-15421493642315/AnsiballZ_copy.py'
Dec 11 09:10:18 compute-0 sudo[66622]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 11 09:10:18 compute-0 python3.9[66624]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-flushes.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1765444217.528599-1196-15421493642315/.source.nft follow=False _original_basename=flush-chain.j2 checksum=d16337256a56373421842284fe09e4e6c7df417e backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 11 09:10:18 compute-0 sudo[66622]: pam_unix(sudo:session): session closed for user root
Dec 11 09:10:19 compute-0 sudo[66774]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nbfkpfrqcrnsqtqaogetuothlskwjyax ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765444218.826039-1241-121265425811659/AnsiballZ_stat.py'
Dec 11 09:10:19 compute-0 sudo[66774]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 11 09:10:19 compute-0 python3.9[66776]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-chains.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 11 09:10:19 compute-0 sudo[66774]: pam_unix(sudo:session): session closed for user root
Dec 11 09:10:19 compute-0 sudo[66897]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cznkbcbpbpmavufvtojnemokyhjxnbls ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765444218.826039-1241-121265425811659/AnsiballZ_copy.py'
Dec 11 09:10:19 compute-0 sudo[66897]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 11 09:10:19 compute-0 python3.9[66899]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-chains.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1765444218.826039-1241-121265425811659/.source.nft follow=False _original_basename=chains.j2 checksum=2079f3b60590a165d1d502e763170876fc8e2984 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 11 09:10:19 compute-0 sudo[66897]: pam_unix(sudo:session): session closed for user root
Dec 11 09:10:20 compute-0 sudo[67049]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zjksjrvmcerauvopxcrdvixaufqgygwv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765444220.226721-1286-138451484244268/AnsiballZ_stat.py'
Dec 11 09:10:20 compute-0 sudo[67049]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 11 09:10:20 compute-0 python3.9[67051]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-rules.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 11 09:10:20 compute-0 sudo[67049]: pam_unix(sudo:session): session closed for user root
Dec 11 09:10:21 compute-0 sudo[67172]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-irdrvlzeyrcipvkhdqnvcimfrdeorsdk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765444220.226721-1286-138451484244268/AnsiballZ_copy.py'
Dec 11 09:10:21 compute-0 sudo[67172]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 11 09:10:21 compute-0 python3.9[67174]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-rules.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1765444220.226721-1286-138451484244268/.source.nft follow=False _original_basename=ruleset.j2 checksum=693377dc03e5b6b24713cb537b18b88774724e35 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 11 09:10:21 compute-0 sudo[67172]: pam_unix(sudo:session): session closed for user root
Dec 11 09:10:21 compute-0 sudo[67324]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ufmxwpvnnzfbpezeqxicmhxkczjdsfph ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765444221.6596081-1331-90470972189401/AnsiballZ_file.py'
Dec 11 09:10:21 compute-0 sudo[67324]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 11 09:10:22 compute-0 python3.9[67326]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/etc/nftables/edpm-rules.nft.changed state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 11 09:10:22 compute-0 sudo[67324]: pam_unix(sudo:session): session closed for user root
Dec 11 09:10:23 compute-0 sudo[67476]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-esvqdsekpxqgobcrqlokcunxgtadfeqz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765444222.9255714-1355-277937939789961/AnsiballZ_command.py'
Dec 11 09:10:23 compute-0 sudo[67476]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 11 09:10:23 compute-0 python3.9[67478]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-chains.nft /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft /etc/nftables/edpm-jumps.nft | nft -c -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 11 09:10:23 compute-0 sudo[67476]: pam_unix(sudo:session): session closed for user root
Dec 11 09:10:24 compute-0 sudo[67635]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-okyynedkuvmdpjftjsyblsbxdqkqnlvy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765444223.647648-1379-151276744618677/AnsiballZ_blockinfile.py'
Dec 11 09:10:24 compute-0 sudo[67635]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 11 09:10:24 compute-0 python3.9[67637]: ansible-ansible.builtin.blockinfile Invoked with backup=False block=include "/etc/nftables/iptables.nft"
                                            include "/etc/nftables/edpm-chains.nft"
                                            include "/etc/nftables/edpm-rules.nft"
                                            include "/etc/nftables/edpm-jumps.nft"
                                             path=/etc/sysconfig/nftables.conf validate=nft -c -f %s state=present marker=# {mark} ANSIBLE MANAGED BLOCK create=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False encoding=utf-8 unsafe_writes=False insertafter=None insertbefore=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 11 09:10:24 compute-0 sudo[67635]: pam_unix(sudo:session): session closed for user root
Dec 11 09:10:24 compute-0 sudo[67788]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cmbkweuxyfjsftudpbleahncunnouleh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765444224.61125-1406-52108385614103/AnsiballZ_file.py'
Dec 11 09:10:24 compute-0 sudo[67788]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 11 09:10:25 compute-0 python3.9[67790]: ansible-ansible.builtin.file Invoked with group=hugetlbfs mode=0775 owner=zuul path=/dev/hugepages1G state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 11 09:10:25 compute-0 sudo[67788]: pam_unix(sudo:session): session closed for user root
Dec 11 09:10:25 compute-0 sudo[67940]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-manlnpbgsoojdazxeaxqobeytrlgynyk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765444225.3085656-1406-206368515377304/AnsiballZ_file.py'
Dec 11 09:10:25 compute-0 sudo[67940]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 11 09:10:25 compute-0 python3.9[67942]: ansible-ansible.builtin.file Invoked with group=hugetlbfs mode=0775 owner=zuul path=/dev/hugepages2M state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 11 09:10:25 compute-0 sudo[67940]: pam_unix(sudo:session): session closed for user root
Dec 11 09:10:26 compute-0 sudo[68092]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-orgreezevsuqpzldiamjiwyvhdszaqqx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765444226.0393443-1451-190595108443483/AnsiballZ_mount.py'
Dec 11 09:10:26 compute-0 sudo[68092]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 11 09:10:26 compute-0 python3.9[68094]: ansible-ansible.posix.mount Invoked with fstype=hugetlbfs opts=pagesize=1G path=/dev/hugepages1G src=none state=mounted boot=True dump=0 opts_no_log=False passno=0 backup=False fstab=None
Dec 11 09:10:26 compute-0 sudo[68092]: pam_unix(sudo:session): session closed for user root
Dec 11 09:10:27 compute-0 sudo[68245]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mgefatcksljiynqovhmqdnqdwwbzxwzp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765444227.0735278-1451-176069832895449/AnsiballZ_mount.py'
Dec 11 09:10:27 compute-0 sudo[68245]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 11 09:10:27 compute-0 python3.9[68247]: ansible-ansible.posix.mount Invoked with fstype=hugetlbfs opts=pagesize=2M path=/dev/hugepages2M src=none state=mounted boot=True dump=0 opts_no_log=False passno=0 backup=False fstab=None
Dec 11 09:10:27 compute-0 sudo[68245]: pam_unix(sudo:session): session closed for user root
Dec 11 09:10:28 compute-0 sshd-session[59047]: Connection closed by 192.168.122.30 port 37026
Dec 11 09:10:28 compute-0 sshd-session[59044]: pam_unix(sshd:session): session closed for user zuul
Dec 11 09:10:28 compute-0 systemd[1]: session-14.scope: Deactivated successfully.
Dec 11 09:10:28 compute-0 systemd[1]: session-14.scope: Consumed 36.624s CPU time.
Dec 11 09:10:28 compute-0 systemd-logind[792]: Session 14 logged out. Waiting for processes to exit.
Dec 11 09:10:28 compute-0 systemd-logind[792]: Removed session 14.
Dec 11 09:10:33 compute-0 sshd-session[68273]: Accepted publickey for zuul from 192.168.122.30 port 39196 ssh2: ECDSA SHA256:cT48FffNzE3FSGRebchaTzw3hIqcCIoBfXY30Q2C9bc
Dec 11 09:10:33 compute-0 systemd-logind[792]: New session 15 of user zuul.
Dec 11 09:10:33 compute-0 systemd[1]: Started Session 15 of User zuul.
Dec 11 09:10:33 compute-0 sshd-session[68273]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Dec 11 09:10:34 compute-0 sudo[68426]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tuodwntsbsizkvdmxmmsnhqlkspziiyr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765444233.8247676-18-177952455903214/AnsiballZ_tempfile.py'
Dec 11 09:10:34 compute-0 sudo[68426]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 11 09:10:34 compute-0 python3.9[68428]: ansible-ansible.builtin.tempfile Invoked with state=file prefix=ansible. suffix= path=None
Dec 11 09:10:34 compute-0 sudo[68426]: pam_unix(sudo:session): session closed for user root
Dec 11 09:10:35 compute-0 sudo[68578]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oowtzthsmwivzhustzxfwdtqdplnmeyw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765444234.8032985-54-61442880799101/AnsiballZ_stat.py'
Dec 11 09:10:35 compute-0 sudo[68578]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 11 09:10:35 compute-0 python3.9[68580]: ansible-ansible.builtin.stat Invoked with path=/etc/ssh/ssh_known_hosts follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 11 09:10:35 compute-0 sudo[68578]: pam_unix(sudo:session): session closed for user root
Dec 11 09:10:36 compute-0 sudo[68730]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wnwdutxscpsqstpywwwrqyezyvnymges ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765444235.8012319-84-96710578799687/AnsiballZ_setup.py'
Dec 11 09:10:36 compute-0 sudo[68730]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 11 09:10:36 compute-0 python3.9[68732]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'ssh_host_key_rsa_public', 'ssh_host_key_ed25519_public', 'ssh_host_key_ecdsa_public'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 11 09:10:36 compute-0 sudo[68730]: pam_unix(sudo:session): session closed for user root
Dec 11 09:10:37 compute-0 systemd[1]: systemd-timedated.service: Deactivated successfully.
Dec 11 09:10:37 compute-0 sudo[68884]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wbiqgemnkxblrutxmhozdjfeivbuenpy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765444236.9641325-109-144328175608943/AnsiballZ_blockinfile.py'
Dec 11 09:10:37 compute-0 sudo[68884]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 11 09:10:37 compute-0 python3.9[68886]: ansible-ansible.builtin.blockinfile Invoked with block=compute-1.ctlplane.example.com,192.168.122.101,compute-1* ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQC0qJprbeWE9gzziBi8iIuZ5/k4Y6VfsPefjRYND6OZTash940tra+OExym0WgX87tl0p8X5af7e5kx9ApSRGaDIhv1rHPZ/IiVWkI2kY4RRTIVBCxGHLfXtRDD8GaQmG8fFQddHbPFCjIrFu10YCvPF16Y/Mt5nOPq9lYkZmzorw5wMqD/xgf9v/jVXY4cCyAfeF3XiDAXmPuUspWmwPvdtRAUxIhnReR7164EpwboPcrrTUPSgIDy/z0IM0qJgoQ5hS0fQs87Lc2HEmz2jJE1uejxE+/SCuhA7bx2aqd0z6ijsdztI0+Ysu7VJZSTCPCK80fl+1QGZ3bcudNTf1ibWIpUBfFF5KKXvx9lt946bSY87rBt2xRrhZmMtEWNHnsJlN2tx2VUFi5u1V3mlluQv2KcMkQ+DwGqSZNPpnMabP6sjsRCedR5UNCGpkAlQnEnZjbOKbpjaFMHebuCOZnfOFv9p8gP2mks6D+rd9XoG4A1Q50GxFGJOb27xFQj+PU=
                                            compute-1.ctlplane.example.com,192.168.122.101,compute-1* ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIAKW1m3VuEqNuliezDXOtl5vBx95kHlieh9m8cBF4J2o
                                            compute-1.ctlplane.example.com,192.168.122.101,compute-1* ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBAm4ND7jopGSwcS22rsdD1j27W4muhBhl+dlyzQbKJ8MgLCf5CxN7Ilf4gHl9+Edxlr9w64sx8AeNQLQfGb3frU=
                                            compute-0.ctlplane.example.com,192.168.122.100,compute-0* ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCuuKiq2V+he/ThrupEg3hiw62Bbz9UDb8TfczhWXsP9EngwA1Jx/TgF8INdtvkM8aT2r29ZIj28UjCHuUomkDdsSnN+WYIg4rfwHhoSKEjqK2xAsoN4ad3ZHz+3NzrP80ZibBpikxrE5Qa8M77zrDVCBgV/IX9I7NJygz+xwc6+IqBXOrUx3SXRNm+puno1NmCR355gAqooVxq+pIUeFp0ON5zM29qtrmQz9gTnTiTsOiUp0yzsLQnImSKAlgYndcILPzXzIy1tFDPaCuZdYzkt6BOw/8+fUG4gOyU6r3utbIjMYM+fQNNM/quSFClwXT7oiIBe1C7LalTUlkeI9YRJf/10RXdv+DvLzNgWlDFqQu6kBT5EzIiVVzrX/osRYH29QFP0Gt8Js/MaqtQMycfQnBVr+L5lAIZfgXbLq2JHM0jsdAYqBxxXYlMffZlmFlNwQisoxilTLioxz0Vkbgaqn/Tooh2xUViJZHWmklWtkd1xAz1PIouEjHE6++Ayfk=
                                            compute-0.ctlplane.example.com,192.168.122.100,compute-0* ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIAlhtIw9BjEZYtqGZQGR+gBrkihlR53uEdNIOYiXwd9H
                                            compute-0.ctlplane.example.com,192.168.122.100,compute-0* ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBEyaxw/KDB0GRXzjHyE2amKF8n2drpoIGhmK9B0c/hWAAxCsGGS8XxxI4TjpSvDMNoz37cKcT3e8SbrO95NheEU=
                                            compute-2.ctlplane.example.com,192.168.122.102,compute-2* ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDfGmKdzbx9cziI2UInDx8Z8fdzrd4qGuM081+Q0LHCe/Uk3h1ehAI1eUAf55tKHkk7aA7WMefWDMJXZJD8OlLu0uwUt/qwmp8vU7Sa7LCTdMAV/+urhPVNiqLPyPRwLzQ8ooHPQVSd3DDUkbDTAiKPSwO6O41joP9vi3IiLQHV1ia4HLL6Xid6QQ2PXQaEvs9MNuBFmnmLE+0TyGk8DHTsTUDMWJIBOkUdsR3XFYsT28eLJVM6jpVEok+DQtuxUXUBhExRj044h8jLEdduzFJ8bXYkarcYE6BCGWFuxu6ukIWhN6vUleOQraHHlY1T6+I3oqdV8R1aIFg88wb+2AH6sICyeeMqDKylfxNM1h3YfvBibBqUygE6fcOqd9PQ7itlcqq1fyAJCXf1pORVUCsOF0hMoq8KULzeXqK6YyY1XmhUHan5BJk7yuRW3a3opcDHyU/A8Oo/SDcTsH8KPdScZE/WMcfFH3l5hWSguT84BC9B3+EheVHGGOPoCbX+tSs=
                                            compute-2.ctlplane.example.com,192.168.122.102,compute-2* ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAILR/pY0/ZxqiLi0s+th9yq8tTKO1MwQXuTHHzr6rD8dL
                                            compute-2.ctlplane.example.com,192.168.122.102,compute-2* ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBCCgLzqUdyzTsV/93pYNa7b//8jw8BJ7ijBVPNT1InrXl2EFJm3ZdwP+GHug2pMLz0UjwWUesGsid8zCMbx1Gto=
                                             create=True mode=0644 path=/tmp/ansible.vq4miclq state=present marker=# {mark} ANSIBLE MANAGED BLOCK backup=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False encoding=utf-8 unsafe_writes=False insertafter=None insertbefore=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 11 09:10:37 compute-0 sudo[68884]: pam_unix(sudo:session): session closed for user root
Dec 11 09:10:38 compute-0 sudo[69036]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gzknizrqpoqtkngvsxtzvahrdcswohyk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765444237.7974336-133-131759007093706/AnsiballZ_command.py'
Dec 11 09:10:38 compute-0 sudo[69036]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 11 09:10:38 compute-0 python3.9[69038]: ansible-ansible.legacy.command Invoked with _raw_params=cat '/tmp/ansible.vq4miclq' > /etc/ssh/ssh_known_hosts _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 11 09:10:38 compute-0 sudo[69036]: pam_unix(sudo:session): session closed for user root
Dec 11 09:10:39 compute-0 sudo[69190]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sfxxqduzuvmfcvhjtlvbxxwmnxddfacp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765444238.6343865-157-46332373636132/AnsiballZ_file.py'
Dec 11 09:10:39 compute-0 sudo[69190]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 11 09:10:39 compute-0 python3.9[69192]: ansible-ansible.builtin.file Invoked with path=/tmp/ansible.vq4miclq state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 11 09:10:39 compute-0 sudo[69190]: pam_unix(sudo:session): session closed for user root
Dec 11 09:10:39 compute-0 sshd-session[68276]: Connection closed by 192.168.122.30 port 39196
Dec 11 09:10:39 compute-0 sshd-session[68273]: pam_unix(sshd:session): session closed for user zuul
Dec 11 09:10:39 compute-0 systemd[1]: session-15.scope: Deactivated successfully.
Dec 11 09:10:39 compute-0 systemd[1]: session-15.scope: Consumed 3.640s CPU time.
Dec 11 09:10:39 compute-0 systemd-logind[792]: Session 15 logged out. Waiting for processes to exit.
Dec 11 09:10:39 compute-0 systemd-logind[792]: Removed session 15.
Dec 11 09:10:46 compute-0 sshd-session[69218]: Accepted publickey for zuul from 192.168.122.30 port 55816 ssh2: ECDSA SHA256:cT48FffNzE3FSGRebchaTzw3hIqcCIoBfXY30Q2C9bc
Dec 11 09:10:46 compute-0 systemd-logind[792]: New session 16 of user zuul.
Dec 11 09:10:46 compute-0 systemd[1]: Started Session 16 of User zuul.
Dec 11 09:10:46 compute-0 sshd-session[69218]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Dec 11 09:10:47 compute-0 python3.9[69371]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 11 09:10:48 compute-0 sudo[69525]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vzfvobrthjbbdruunssstwauizuofrxv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765444247.5374393-56-27964118111206/AnsiballZ_systemd.py'
Dec 11 09:10:48 compute-0 sudo[69525]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 11 09:10:48 compute-0 python3.9[69527]: ansible-ansible.builtin.systemd Invoked with enabled=True name=sshd daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None masked=None
Dec 11 09:10:48 compute-0 sudo[69525]: pam_unix(sudo:session): session closed for user root
Dec 11 09:10:48 compute-0 sudo[69679]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-izgqlczjsbnqgoguxalrflvoizxendxq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765444248.6471193-80-277755257132051/AnsiballZ_systemd.py'
Dec 11 09:10:48 compute-0 sudo[69679]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 11 09:10:49 compute-0 python3.9[69681]: ansible-ansible.builtin.systemd Invoked with name=sshd state=started daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Dec 11 09:10:49 compute-0 sudo[69679]: pam_unix(sudo:session): session closed for user root
Dec 11 09:10:49 compute-0 sudo[69832]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mqbancwszmyyrnvxpbomhakqakzxajyx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765444249.4970114-107-271586551913114/AnsiballZ_command.py'
Dec 11 09:10:49 compute-0 sudo[69832]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 11 09:10:50 compute-0 python3.9[69834]: ansible-ansible.legacy.command Invoked with _raw_params=nft -f /etc/nftables/edpm-chains.nft _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 11 09:10:50 compute-0 sudo[69832]: pam_unix(sudo:session): session closed for user root
Dec 11 09:10:51 compute-0 sudo[69985]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zajmcxjujrjwbrwgzecnswdnatdvsfxm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765444251.1257-131-126254149667635/AnsiballZ_stat.py'
Dec 11 09:10:51 compute-0 sudo[69985]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 11 09:10:51 compute-0 python3.9[69987]: ansible-ansible.builtin.stat Invoked with path=/etc/nftables/edpm-rules.nft.changed follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 11 09:10:51 compute-0 sudo[69985]: pam_unix(sudo:session): session closed for user root
Dec 11 09:10:52 compute-0 sudo[70139]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-iiwieqgxvcaluovjsjwnoxpgqpcxonhn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765444251.9774578-155-183625055533908/AnsiballZ_command.py'
Dec 11 09:10:52 compute-0 sudo[70139]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 11 09:10:52 compute-0 python3.9[70141]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft | nft -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 11 09:10:52 compute-0 sudo[70139]: pam_unix(sudo:session): session closed for user root
Dec 11 09:10:53 compute-0 sudo[70294]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hqvvjpekxyiirxdnzowbilftzrtjhrcm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765444252.7451932-179-84813433498779/AnsiballZ_file.py'
Dec 11 09:10:53 compute-0 sudo[70294]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 11 09:10:53 compute-0 python3.9[70296]: ansible-ansible.builtin.file Invoked with path=/etc/nftables/edpm-rules.nft.changed state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 11 09:10:53 compute-0 sudo[70294]: pam_unix(sudo:session): session closed for user root
Dec 11 09:10:53 compute-0 sshd-session[69221]: Connection closed by 192.168.122.30 port 55816
Dec 11 09:10:53 compute-0 sshd-session[69218]: pam_unix(sshd:session): session closed for user zuul
Dec 11 09:10:53 compute-0 systemd-logind[792]: Session 16 logged out. Waiting for processes to exit.
Dec 11 09:10:53 compute-0 systemd[1]: session-16.scope: Deactivated successfully.
Dec 11 09:10:53 compute-0 systemd[1]: session-16.scope: Consumed 4.597s CPU time.
Dec 11 09:10:53 compute-0 systemd-logind[792]: Removed session 16.
Dec 11 09:10:58 compute-0 sshd-session[70322]: Accepted publickey for zuul from 192.168.122.30 port 41426 ssh2: ECDSA SHA256:cT48FffNzE3FSGRebchaTzw3hIqcCIoBfXY30Q2C9bc
Dec 11 09:10:58 compute-0 systemd-logind[792]: New session 17 of user zuul.
Dec 11 09:10:58 compute-0 systemd[1]: Started Session 17 of User zuul.
Dec 11 09:10:58 compute-0 sshd-session[70322]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Dec 11 09:11:00 compute-0 python3.9[70475]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 11 09:11:01 compute-0 sudo[70629]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zxbeolciziczphpbsapyczdnutdiekka ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765444260.8039727-62-145431374623597/AnsiballZ_setup.py'
Dec 11 09:11:01 compute-0 sudo[70629]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 11 09:11:01 compute-0 python3.9[70631]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Dec 11 09:11:01 compute-0 sudo[70629]: pam_unix(sudo:session): session closed for user root
Dec 11 09:11:02 compute-0 sudo[70713]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-muysyvdssdvbkrlquneshbkysedymfaz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765444260.8039727-62-145431374623597/AnsiballZ_dnf.py'
Dec 11 09:11:02 compute-0 sudo[70713]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 11 09:11:02 compute-0 python3.9[70715]: ansible-ansible.legacy.dnf Invoked with name=['yum-utils'] allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None state=None
Dec 11 09:11:04 compute-0 sudo[70713]: pam_unix(sudo:session): session closed for user root
Dec 11 09:11:05 compute-0 python3.9[70866]: ansible-ansible.legacy.command Invoked with _raw_params=needs-restarting -r _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 11 09:11:07 compute-0 python3.9[71017]: ansible-ansible.builtin.find Invoked with paths=['/var/lib/openstack/reboot_required/'] patterns=[] read_whole_file=False file_type=file age_stamp=mtime recurse=False hidden=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Dec 11 09:11:08 compute-0 python3.9[71167]: ansible-ansible.builtin.stat Invoked with path=/var/lib/config-data/puppet-generated follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 11 09:11:08 compute-0 rsyslogd[1010]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Dec 11 09:11:09 compute-0 python3.9[71318]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/config follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 11 09:11:09 compute-0 sshd-session[70325]: Connection closed by 192.168.122.30 port 41426
Dec 11 09:11:09 compute-0 sshd-session[70322]: pam_unix(sshd:session): session closed for user zuul
Dec 11 09:11:09 compute-0 systemd[1]: session-17.scope: Deactivated successfully.
Dec 11 09:11:09 compute-0 systemd[1]: session-17.scope: Consumed 6.303s CPU time.
Dec 11 09:11:09 compute-0 systemd-logind[792]: Session 17 logged out. Waiting for processes to exit.
Dec 11 09:11:09 compute-0 systemd-logind[792]: Removed session 17.
Dec 11 09:11:18 compute-0 sshd-session[71343]: Accepted publickey for zuul from 38.102.83.179 port 41478 ssh2: RSA SHA256:Y1EkKFCM2AxcqFrasoatI/7noXQ4Hq5V3b6Fo5AKQhU
Dec 11 09:11:18 compute-0 systemd-logind[792]: New session 18 of user zuul.
Dec 11 09:11:18 compute-0 systemd[1]: Started Session 18 of User zuul.
Dec 11 09:11:18 compute-0 sshd-session[71343]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Dec 11 09:11:18 compute-0 sudo[71419]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fnbndlgebhchwkfusnxqaquczbsvfglw ; /usr/bin/python3'
Dec 11 09:11:18 compute-0 sudo[71419]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 11 09:11:18 compute-0 useradd[71423]: new group: name=ceph-admin, GID=42478
Dec 11 09:11:18 compute-0 useradd[71423]: new user: name=ceph-admin, UID=42477, GID=42478, home=/home/ceph-admin, shell=/bin/bash, from=none
Dec 11 09:11:19 compute-0 sudo[71419]: pam_unix(sudo:session): session closed for user root
Dec 11 09:11:19 compute-0 sudo[71505]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hugmbftupomdntjmksdqzmvaxlircopm ; /usr/bin/python3'
Dec 11 09:11:19 compute-0 sudo[71505]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 11 09:11:19 compute-0 sudo[71505]: pam_unix(sudo:session): session closed for user root
Dec 11 09:11:19 compute-0 sudo[71578]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xkgsafcthcdayableiwjfiuzakhlhuen ; /usr/bin/python3'
Dec 11 09:11:19 compute-0 sudo[71578]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 11 09:11:20 compute-0 sudo[71578]: pam_unix(sudo:session): session closed for user root
Dec 11 09:11:20 compute-0 sudo[71628]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-laufkkmpicqxyfsqmmouinoyzhkhlxmm ; /usr/bin/python3'
Dec 11 09:11:20 compute-0 sudo[71628]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 11 09:11:20 compute-0 sudo[71628]: pam_unix(sudo:session): session closed for user root
Dec 11 09:11:20 compute-0 sudo[71654]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mwxvatuymxbcirkizkfvwtqwjoiktcgn ; /usr/bin/python3'
Dec 11 09:11:20 compute-0 sudo[71654]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 11 09:11:20 compute-0 sudo[71654]: pam_unix(sudo:session): session closed for user root
Dec 11 09:11:21 compute-0 sudo[71680]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ezxybnctixrmeovovomyyhgaewwwcelt ; /usr/bin/python3'
Dec 11 09:11:21 compute-0 sudo[71680]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 11 09:11:21 compute-0 sudo[71680]: pam_unix(sudo:session): session closed for user root
Dec 11 09:11:21 compute-0 sudo[71706]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yiusbpbmzhdfkdxrqucpxepmzwdmgrtj ; /usr/bin/python3'
Dec 11 09:11:21 compute-0 sudo[71706]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 11 09:11:21 compute-0 sudo[71706]: pam_unix(sudo:session): session closed for user root
Dec 11 09:11:22 compute-0 sudo[71784]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tuwnuwvbgqlkyjapllgcvdjxrmpovblv ; /usr/bin/python3'
Dec 11 09:11:22 compute-0 sudo[71784]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 11 09:11:22 compute-0 sudo[71784]: pam_unix(sudo:session): session closed for user root
Dec 11 09:11:22 compute-0 sudo[71857]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nhykahxwunaanmptqcrchbbwdjfitlkh ; /usr/bin/python3'
Dec 11 09:11:22 compute-0 sudo[71857]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 11 09:11:22 compute-0 sudo[71857]: pam_unix(sudo:session): session closed for user root
Dec 11 09:11:23 compute-0 sudo[71959]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xihyrrlttkujunwskqbpgbewdarjvdkc ; /usr/bin/python3'
Dec 11 09:11:23 compute-0 sudo[71959]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 11 09:11:23 compute-0 sudo[71959]: pam_unix(sudo:session): session closed for user root
Dec 11 09:11:23 compute-0 sudo[72032]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-steeupwkevyarayvuyabyisiinwfemjo ; /usr/bin/python3'
Dec 11 09:11:23 compute-0 sudo[72032]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 11 09:11:23 compute-0 sudo[72032]: pam_unix(sudo:session): session closed for user root
Dec 11 09:11:24 compute-0 sudo[72082]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zfdmgomgojmxiolsiakkymhmrbosrrxv ; /usr/bin/python3'
Dec 11 09:11:24 compute-0 sudo[72082]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 11 09:11:24 compute-0 python3[72084]: ansible-ansible.legacy.setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 11 09:11:25 compute-0 sudo[72082]: pam_unix(sudo:session): session closed for user root
Dec 11 09:11:26 compute-0 sudo[72177]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jjdapwjqxmlccrqdvbhydfckjquzumsl ; /usr/bin/python3'
Dec 11 09:11:26 compute-0 sudo[72177]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 11 09:11:26 compute-0 python3[72179]: ansible-ansible.legacy.dnf Invoked with name=['util-linux', 'lvm2', 'jq', 'podman'] state=present allow_downgrade=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 allowerasing=False nobest=False use_backend=auto conf_file=None disable_excludes=None download_dir=None list=None releasever=None
Dec 11 09:11:27 compute-0 sudo[72177]: pam_unix(sudo:session): session closed for user root
Dec 11 09:11:28 compute-0 sudo[72204]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yvpltvzavpwplyxwbpvnkcjfbwvkovps ; /usr/bin/python3'
Dec 11 09:11:28 compute-0 sudo[72204]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 11 09:11:28 compute-0 python3[72206]: ansible-ansible.builtin.stat Invoked with path=/dev/loop3 follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Dec 11 09:11:28 compute-0 sudo[72204]: pam_unix(sudo:session): session closed for user root
Dec 11 09:11:28 compute-0 sudo[72230]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zgseebvnpeagvfudzurcokslidnsquuq ; /usr/bin/python3'
Dec 11 09:11:28 compute-0 sudo[72230]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 11 09:11:28 compute-0 python3[72232]: ansible-ansible.legacy.command Invoked with _raw_params=dd if=/dev/zero of=/var/lib/ceph-osd-0.img bs=1 count=0 seek=20G
                                          losetup /dev/loop3 /var/lib/ceph-osd-0.img
                                          lsblk _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 11 09:11:28 compute-0 kernel: loop: module loaded
Dec 11 09:11:28 compute-0 kernel: loop3: detected capacity change from 0 to 41943040
Dec 11 09:11:28 compute-0 sudo[72230]: pam_unix(sudo:session): session closed for user root
Dec 11 09:11:29 compute-0 sudo[72265]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gndxjwflwnigcuhgtwcncfgtbwhsoplv ; /usr/bin/python3'
Dec 11 09:11:29 compute-0 sudo[72265]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 11 09:11:29 compute-0 python3[72267]: ansible-ansible.legacy.command Invoked with _raw_params=pvcreate /dev/loop3
                                          vgcreate ceph_vg0 /dev/loop3
                                          lvcreate -n ceph_lv0 -l +100%FREE ceph_vg0
                                          lvs _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 11 09:11:29 compute-0 chronyd[58563]: Selected source 23.159.16.194 (pool.ntp.org)
Dec 11 09:11:29 compute-0 lvm[72270]: PV /dev/loop3 not used.
Dec 11 09:11:29 compute-0 lvm[72279]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Dec 11 09:11:29 compute-0 sudo[72265]: pam_unix(sudo:session): session closed for user root
Dec 11 09:11:29 compute-0 systemd[1]: Started /usr/sbin/lvm vgchange -aay --autoactivation event ceph_vg0.
Dec 11 09:11:29 compute-0 lvm[72281]:   1 logical volume(s) in volume group "ceph_vg0" now active
Dec 11 09:11:29 compute-0 systemd[1]: lvm-activate-ceph_vg0.service: Deactivated successfully.
Dec 11 09:11:29 compute-0 sudo[72357]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-iwlkbkqjkfekkomlpvbiyvkzxbmhgekc ; /usr/bin/python3'
Dec 11 09:11:29 compute-0 sudo[72357]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 11 09:11:30 compute-0 python3[72359]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/ceph-osd-losetup-0.service follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec 11 09:11:30 compute-0 sudo[72357]: pam_unix(sudo:session): session closed for user root
Dec 11 09:11:30 compute-0 sudo[72430]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cdylfcuhxkqyuzovrjxlbwqqyikuatrk ; /usr/bin/python3'
Dec 11 09:11:30 compute-0 sudo[72430]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 11 09:11:30 compute-0 python3[72432]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1765444289.7532592-36752-250853699706875/source dest=/etc/systemd/system/ceph-osd-losetup-0.service mode=0644 force=True follow=False _original_basename=ceph-osd-losetup.service.j2 checksum=427b1db064a970126b729b07acf99fa7d0eecb9c backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 11 09:11:30 compute-0 sudo[72430]: pam_unix(sudo:session): session closed for user root
Dec 11 09:11:30 compute-0 sudo[72480]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-opnbyfgjdcgsbyxochsejzkpqmepyvtg ; /usr/bin/python3'
Dec 11 09:11:30 compute-0 sudo[72480]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 11 09:11:31 compute-0 python3[72482]: ansible-ansible.builtin.systemd Invoked with state=started enabled=True name=ceph-osd-losetup-0.service daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 11 09:11:31 compute-0 systemd[1]: Reloading.
Dec 11 09:11:31 compute-0 systemd-rc-local-generator[72510]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 11 09:11:31 compute-0 systemd-sysv-generator[72515]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 11 09:11:31 compute-0 systemd[1]: Starting Ceph OSD losetup...
Dec 11 09:11:31 compute-0 bash[72523]: /dev/loop3: [64513]:4194937 (/var/lib/ceph-osd-0.img)
Dec 11 09:11:31 compute-0 systemd[1]: Finished Ceph OSD losetup.
Dec 11 09:11:31 compute-0 lvm[72524]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Dec 11 09:11:31 compute-0 lvm[72524]: VG ceph_vg0 finished
Dec 11 09:11:31 compute-0 sudo[72480]: pam_unix(sudo:session): session closed for user root
Dec 11 09:11:33 compute-0 python3[72548]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'network'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 11 09:11:36 compute-0 sudo[72639]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yixoyrryuoyhpwzfzmweibdxompauzdu ; /usr/bin/python3'
Dec 11 09:11:36 compute-0 sudo[72639]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 11 09:11:36 compute-0 python3[72641]: ansible-ansible.legacy.dnf Invoked with name=['centos-release-ceph-squid'] state=present allow_downgrade=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 allowerasing=False nobest=False use_backend=auto conf_file=None disable_excludes=None download_dir=None list=None releasever=None
Dec 11 09:11:39 compute-0 sudo[72639]: pam_unix(sudo:session): session closed for user root
Dec 11 09:11:39 compute-0 sudo[72696]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fwtmjgsccjjpefpotossvjfylmnecbiq ; /usr/bin/python3'
Dec 11 09:11:39 compute-0 sudo[72696]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 11 09:11:39 compute-0 python3[72698]: ansible-ansible.legacy.dnf Invoked with name=['cephadm'] state=present allow_downgrade=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 allowerasing=False nobest=False use_backend=auto conf_file=None disable_excludes=None download_dir=None list=None releasever=None
Dec 11 09:11:43 compute-0 groupadd[72708]: group added to /etc/group: name=cephadm, GID=992
Dec 11 09:11:43 compute-0 groupadd[72708]: group added to /etc/gshadow: name=cephadm
Dec 11 09:11:43 compute-0 groupadd[72708]: new group: name=cephadm, GID=992
Dec 11 09:11:43 compute-0 useradd[72715]: new user: name=cephadm, UID=992, GID=992, home=/var/lib/cephadm, shell=/bin/bash, from=none
Dec 11 09:11:43 compute-0 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Dec 11 09:11:44 compute-0 systemd[1]: Starting man-db-cache-update.service...
Dec 11 09:11:44 compute-0 sudo[72696]: pam_unix(sudo:session): session closed for user root
Dec 11 09:11:45 compute-0 sudo[72810]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qllddevagggnochvejtmebivpkshwvou ; /usr/bin/python3'
Dec 11 09:11:45 compute-0 sudo[72810]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 11 09:11:45 compute-0 python3[72812]: ansible-ansible.builtin.stat Invoked with path=/usr/sbin/cephadm follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Dec 11 09:11:45 compute-0 sudo[72810]: pam_unix(sudo:session): session closed for user root
Dec 11 09:11:45 compute-0 sudo[72838]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sxavudpnszqczkwrqfcdtueurccxdfjs ; /usr/bin/python3'
Dec 11 09:11:45 compute-0 sudo[72838]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 11 09:11:45 compute-0 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Dec 11 09:11:45 compute-0 systemd[1]: Finished man-db-cache-update.service.
Dec 11 09:11:45 compute-0 systemd[1]: run-r18c4b7c6697f4c8d8c39cae5fb8e3486.service: Deactivated successfully.
Dec 11 09:11:45 compute-0 python3[72840]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/cephadm ls --no-detail _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 11 09:11:45 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Dec 11 09:11:45 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Dec 11 09:11:46 compute-0 sudo[72838]: pam_unix(sudo:session): session closed for user root
Dec 11 09:11:46 compute-0 sudo[72901]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ufphvwkcbdguooycaclkcubyslfsdijh ; /usr/bin/python3'
Dec 11 09:11:46 compute-0 sudo[72901]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 11 09:11:46 compute-0 python3[72903]: ansible-ansible.builtin.file Invoked with path=/etc/ceph state=directory mode=0755 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 11 09:11:46 compute-0 sudo[72901]: pam_unix(sudo:session): session closed for user root
Dec 11 09:11:46 compute-0 sudo[72927]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kfozrdwjhruftghburffvrjgjaeacvfx ; /usr/bin/python3'
Dec 11 09:11:46 compute-0 sudo[72927]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 11 09:11:46 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Dec 11 09:11:46 compute-0 python3[72929]: ansible-ansible.builtin.file Invoked with path=/home/ceph-admin/specs owner=ceph-admin group=ceph-admin mode=0755 state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 11 09:11:46 compute-0 sudo[72927]: pam_unix(sudo:session): session closed for user root
Dec 11 09:11:47 compute-0 sudo[73005]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ynayakfjgraqbhevbuhtqcqtrlvzatut ; /usr/bin/python3'
Dec 11 09:11:47 compute-0 sudo[73005]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 11 09:11:47 compute-0 python3[73007]: ansible-ansible.legacy.stat Invoked with path=/home/ceph-admin/specs/ceph_spec.yaml follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec 11 09:11:47 compute-0 sudo[73005]: pam_unix(sudo:session): session closed for user root
Dec 11 09:11:47 compute-0 sudo[73078]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-durwucvkcrspqvajvlbzrqnmjjzdudes ; /usr/bin/python3'
Dec 11 09:11:47 compute-0 sudo[73078]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 11 09:11:48 compute-0 python3[73080]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1765444307.3896773-36944-78049151471767/source dest=/home/ceph-admin/specs/ceph_spec.yaml owner=ceph-admin group=ceph-admin mode=0644 _original_basename=ceph_spec.yml follow=False checksum=a2c84611a4e46cfce32a90c112eae0345cab6abb backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 11 09:11:48 compute-0 sudo[73078]: pam_unix(sudo:session): session closed for user root
Dec 11 09:11:48 compute-0 sudo[73180]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vxxsztnfsycycpmpfbghqajmimhawqgw ; /usr/bin/python3'
Dec 11 09:11:48 compute-0 sudo[73180]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 11 09:11:48 compute-0 python3[73182]: ansible-ansible.legacy.stat Invoked with path=/home/ceph-admin/assimilate_ceph.conf follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec 11 09:11:48 compute-0 sudo[73180]: pam_unix(sudo:session): session closed for user root
Dec 11 09:11:49 compute-0 sudo[73253]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lmpbxcetusulpjxcbgtfuuuhjgsflsgo ; /usr/bin/python3'
Dec 11 09:11:49 compute-0 sudo[73253]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 11 09:11:49 compute-0 python3[73255]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1765444308.5404274-36962-29628953986402/source dest=/home/ceph-admin/assimilate_ceph.conf owner=ceph-admin group=ceph-admin mode=0644 _original_basename=initial_ceph.conf follow=False checksum=41828f7c2442fdf376911255e33c12863fc3b1b3 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 11 09:11:49 compute-0 sudo[73253]: pam_unix(sudo:session): session closed for user root
Dec 11 09:11:49 compute-0 sudo[73303]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tbjixlltngznpsgrkgocqohqgsnggosb ; /usr/bin/python3'
Dec 11 09:11:49 compute-0 sudo[73303]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 11 09:11:49 compute-0 python3[73305]: ansible-ansible.builtin.stat Invoked with path=/home/ceph-admin/.ssh/id_rsa follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Dec 11 09:11:49 compute-0 sudo[73303]: pam_unix(sudo:session): session closed for user root
Dec 11 09:11:49 compute-0 sudo[73331]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-udpkwebkdakldaysclnlhkjctvpjhhru ; /usr/bin/python3'
Dec 11 09:11:49 compute-0 sudo[73331]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 11 09:11:49 compute-0 python3[73333]: ansible-ansible.builtin.stat Invoked with path=/home/ceph-admin/.ssh/id_rsa.pub follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Dec 11 09:11:49 compute-0 sudo[73331]: pam_unix(sudo:session): session closed for user root
Dec 11 09:11:50 compute-0 sudo[73359]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pvwwrcjdazbbpdxwmxahlvnmvjqzqzzj ; /usr/bin/python3'
Dec 11 09:11:50 compute-0 sudo[73359]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 11 09:11:50 compute-0 python3[73361]: ansible-ansible.builtin.stat Invoked with path=/home/ceph-admin/assimilate_ceph.conf follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Dec 11 09:11:50 compute-0 sudo[73359]: pam_unix(sudo:session): session closed for user root
Dec 11 09:11:50 compute-0 sudo[73387]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xzpnfoqofloutfeecsapqssowjvsdarn ; /usr/bin/python3'
Dec 11 09:11:50 compute-0 sudo[73387]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 11 09:11:50 compute-0 python3[73389]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/cephadm bootstrap --skip-firewalld --ssh-private-key /home/ceph-admin/.ssh/id_rsa --ssh-public-key /home/ceph-admin/.ssh/id_rsa.pub --ssh-user ceph-admin --allow-fqdn-hostname --output-keyring /etc/ceph/ceph.client.admin.keyring --output-config /etc/ceph/ceph.conf --fsid 31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060 --config /home/ceph-admin/assimilate_ceph.conf \--skip-monitoring-stack --skip-dashboard --mon-ip 192.168.122.100 _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 11 09:11:50 compute-0 sshd-session[73393]: Accepted publickey for ceph-admin from 192.168.122.100 port 34042 ssh2: RSA SHA256:VeIx2NZka5hi0niQjHCLie+FE2InrWghbFhBMbpMPGo
Dec 11 09:11:50 compute-0 systemd[1]: Created slice User Slice of UID 42477.
Dec 11 09:11:50 compute-0 systemd[1]: Starting User Runtime Directory /run/user/42477...
Dec 11 09:11:50 compute-0 systemd-logind[792]: New session 19 of user ceph-admin.
Dec 11 09:11:50 compute-0 systemd[1]: Finished User Runtime Directory /run/user/42477.
Dec 11 09:11:50 compute-0 systemd[1]: Starting User Manager for UID 42477...
Dec 11 09:11:50 compute-0 systemd[73397]: pam_unix(systemd-user:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Dec 11 09:11:51 compute-0 systemd[73397]: Queued start job for default target Main User Target.
Dec 11 09:11:51 compute-0 systemd[73397]: Created slice User Application Slice.
Dec 11 09:11:51 compute-0 systemd[73397]: Started Mark boot as successful after the user session has run 2 minutes.
Dec 11 09:11:51 compute-0 systemd[73397]: Started Daily Cleanup of User's Temporary Directories.
Dec 11 09:11:51 compute-0 systemd[73397]: Reached target Paths.
Dec 11 09:11:51 compute-0 systemd[73397]: Reached target Timers.
Dec 11 09:11:51 compute-0 systemd[73397]: Starting D-Bus User Message Bus Socket...
Dec 11 09:11:51 compute-0 systemd[73397]: Starting Create User's Volatile Files and Directories...
Dec 11 09:11:51 compute-0 systemd[73397]: Finished Create User's Volatile Files and Directories.
Dec 11 09:11:51 compute-0 systemd[73397]: Listening on D-Bus User Message Bus Socket.
Dec 11 09:11:51 compute-0 systemd[73397]: Reached target Sockets.
Dec 11 09:11:51 compute-0 systemd[73397]: Reached target Basic System.
Dec 11 09:11:51 compute-0 systemd[73397]: Reached target Main User Target.
Dec 11 09:11:51 compute-0 systemd[73397]: Startup finished in 121ms.
Dec 11 09:11:51 compute-0 systemd[1]: Started User Manager for UID 42477.
Dec 11 09:11:51 compute-0 systemd[1]: Started Session 19 of User ceph-admin.
Dec 11 09:11:51 compute-0 sshd-session[73393]: pam_unix(sshd:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Dec 11 09:11:51 compute-0 sudo[73413]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/echo
Dec 11 09:11:51 compute-0 sudo[73413]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 11 09:11:51 compute-0 sudo[73413]: pam_unix(sudo:session): session closed for user root
Dec 11 09:11:51 compute-0 sshd-session[73412]: Received disconnect from 192.168.122.100 port 34042:11: disconnected by user
Dec 11 09:11:51 compute-0 sshd-session[73412]: Disconnected from user ceph-admin 192.168.122.100 port 34042
Dec 11 09:11:51 compute-0 sshd-session[73393]: pam_unix(sshd:session): session closed for user ceph-admin
Dec 11 09:11:51 compute-0 systemd[1]: session-19.scope: Deactivated successfully.
Dec 11 09:11:51 compute-0 systemd-logind[792]: Session 19 logged out. Waiting for processes to exit.
Dec 11 09:11:51 compute-0 systemd-logind[792]: Removed session 19.
Dec 11 09:11:51 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Dec 11 09:11:51 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Dec 11 09:11:53 compute-0 systemd[1]: var-lib-containers-storage-overlay-compat16124736-lower\x2dmapped.mount: Deactivated successfully.
Dec 11 09:12:01 compute-0 systemd[1]: Stopping User Manager for UID 42477...
Dec 11 09:12:01 compute-0 systemd[73397]: Activating special unit Exit the Session...
Dec 11 09:12:01 compute-0 systemd[73397]: Stopped target Main User Target.
Dec 11 09:12:01 compute-0 systemd[73397]: Stopped target Basic System.
Dec 11 09:12:01 compute-0 systemd[73397]: Stopped target Paths.
Dec 11 09:12:01 compute-0 systemd[73397]: Stopped target Sockets.
Dec 11 09:12:01 compute-0 systemd[73397]: Stopped target Timers.
Dec 11 09:12:01 compute-0 systemd[73397]: Stopped Mark boot as successful after the user session has run 2 minutes.
Dec 11 09:12:01 compute-0 systemd[73397]: Stopped Daily Cleanup of User's Temporary Directories.
Dec 11 09:12:01 compute-0 systemd[73397]: Closed D-Bus User Message Bus Socket.
Dec 11 09:12:01 compute-0 systemd[73397]: Stopped Create User's Volatile Files and Directories.
Dec 11 09:12:01 compute-0 systemd[73397]: Removed slice User Application Slice.
Dec 11 09:12:01 compute-0 systemd[73397]: Reached target Shutdown.
Dec 11 09:12:01 compute-0 systemd[73397]: Finished Exit the Session.
Dec 11 09:12:01 compute-0 systemd[73397]: Reached target Exit the Session.
Dec 11 09:12:01 compute-0 systemd[1]: user@42477.service: Deactivated successfully.
Dec 11 09:12:01 compute-0 systemd[1]: Stopped User Manager for UID 42477.
Dec 11 09:12:01 compute-0 systemd[1]: Stopping User Runtime Directory /run/user/42477...
Dec 11 09:12:01 compute-0 systemd[1]: run-user-42477.mount: Deactivated successfully.
Dec 11 09:12:01 compute-0 systemd[1]: user-runtime-dir@42477.service: Deactivated successfully.
Dec 11 09:12:01 compute-0 systemd[1]: Stopped User Runtime Directory /run/user/42477.
Dec 11 09:12:01 compute-0 systemd[1]: Removed slice User Slice of UID 42477.
Dec 11 09:12:22 compute-0 podman[73491]: 2025-12-11 09:12:22.550111468 +0000 UTC m=+31.048997617 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Dec 11 09:12:22 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Dec 11 09:12:22 compute-0 podman[73550]: 2025-12-11 09:12:22.643301701 +0000 UTC m=+0.053330932 container create 709b4f1715476f3ce2541e515d33daa4177c2bb33af38076b1767411a50f890d (image=quay.io/ceph/ceph:v19, name=elastic_albattani, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 11 09:12:22 compute-0 systemd[1]: Created slice Virtual Machine and Container Slice.
Dec 11 09:12:22 compute-0 systemd[1]: Started libpod-conmon-709b4f1715476f3ce2541e515d33daa4177c2bb33af38076b1767411a50f890d.scope.
Dec 11 09:12:22 compute-0 podman[73550]: 2025-12-11 09:12:22.621222473 +0000 UTC m=+0.031251734 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Dec 11 09:12:22 compute-0 systemd[1]: Started libcrun container.
Dec 11 09:12:22 compute-0 podman[73550]: 2025-12-11 09:12:22.754148632 +0000 UTC m=+0.164177873 container init 709b4f1715476f3ce2541e515d33daa4177c2bb33af38076b1767411a50f890d (image=quay.io/ceph/ceph:v19, name=elastic_albattani, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 11 09:12:22 compute-0 podman[73550]: 2025-12-11 09:12:22.764071122 +0000 UTC m=+0.174100363 container start 709b4f1715476f3ce2541e515d33daa4177c2bb33af38076b1767411a50f890d (image=quay.io/ceph/ceph:v19, name=elastic_albattani, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 11 09:12:22 compute-0 podman[73550]: 2025-12-11 09:12:22.771115931 +0000 UTC m=+0.181145202 container attach 709b4f1715476f3ce2541e515d33daa4177c2bb33af38076b1767411a50f890d (image=quay.io/ceph/ceph:v19, name=elastic_albattani, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, CEPH_REF=squid, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Dec 11 09:12:22 compute-0 elastic_albattani[73566]: ceph version 19.2.3 (c92aebb279828e9c3c1f5d24613efca272649e62) squid (stable)
Dec 11 09:12:22 compute-0 systemd[1]: libpod-709b4f1715476f3ce2541e515d33daa4177c2bb33af38076b1767411a50f890d.scope: Deactivated successfully.
Dec 11 09:12:22 compute-0 podman[73550]: 2025-12-11 09:12:22.889589181 +0000 UTC m=+0.299618422 container died 709b4f1715476f3ce2541e515d33daa4177c2bb33af38076b1767411a50f890d (image=quay.io/ceph/ceph:v19, name=elastic_albattani, org.label-schema.vendor=CentOS, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 11 09:12:22 compute-0 systemd[1]: var-lib-containers-storage-overlay-f797a18d7fedacaad3fa094e72724ba270140ddac968d11550e119220bf3949d-merged.mount: Deactivated successfully.
Dec 11 09:12:22 compute-0 podman[73550]: 2025-12-11 09:12:22.921589647 +0000 UTC m=+0.331618888 container remove 709b4f1715476f3ce2541e515d33daa4177c2bb33af38076b1767411a50f890d (image=quay.io/ceph/ceph:v19, name=elastic_albattani, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=squid, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Dec 11 09:12:22 compute-0 systemd[1]: libpod-conmon-709b4f1715476f3ce2541e515d33daa4177c2bb33af38076b1767411a50f890d.scope: Deactivated successfully.
Dec 11 09:12:22 compute-0 podman[73583]: 2025-12-11 09:12:22.984354792 +0000 UTC m=+0.041100131 container create 0356c030f8e680c388e4ccfbf08c1a4477e438b1c6b5bdd2f4afecb6f073cc98 (image=quay.io/ceph/ceph:v19, name=upbeat_yalow, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, ceph=True, CEPH_REF=squid, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 11 09:12:23 compute-0 systemd[1]: Started libpod-conmon-0356c030f8e680c388e4ccfbf08c1a4477e438b1c6b5bdd2f4afecb6f073cc98.scope.
Dec 11 09:12:23 compute-0 systemd[1]: Started libcrun container.
Dec 11 09:12:23 compute-0 podman[73583]: 2025-12-11 09:12:23.042579435 +0000 UTC m=+0.099324774 container init 0356c030f8e680c388e4ccfbf08c1a4477e438b1c6b5bdd2f4afecb6f073cc98 (image=quay.io/ceph/ceph:v19, name=upbeat_yalow, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Dec 11 09:12:23 compute-0 podman[73583]: 2025-12-11 09:12:23.047970393 +0000 UTC m=+0.104715732 container start 0356c030f8e680c388e4ccfbf08c1a4477e438b1c6b5bdd2f4afecb6f073cc98 (image=quay.io/ceph/ceph:v19, name=upbeat_yalow, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Dec 11 09:12:23 compute-0 podman[73583]: 2025-12-11 09:12:23.051341538 +0000 UTC m=+0.108086877 container attach 0356c030f8e680c388e4ccfbf08c1a4477e438b1c6b5bdd2f4afecb6f073cc98 (image=quay.io/ceph/ceph:v19, name=upbeat_yalow, org.label-schema.build-date=20250325, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 11 09:12:23 compute-0 upbeat_yalow[73600]: 167 167
Dec 11 09:12:23 compute-0 systemd[1]: libpod-0356c030f8e680c388e4ccfbf08c1a4477e438b1c6b5bdd2f4afecb6f073cc98.scope: Deactivated successfully.
Dec 11 09:12:23 compute-0 podman[73583]: 2025-12-11 09:12:23.053109352 +0000 UTC m=+0.109854691 container died 0356c030f8e680c388e4ccfbf08c1a4477e438b1c6b5bdd2f4afecb6f073cc98 (image=quay.io/ceph/ceph:v19, name=upbeat_yalow, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Dec 11 09:12:23 compute-0 podman[73583]: 2025-12-11 09:12:22.968254931 +0000 UTC m=+0.025000290 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Dec 11 09:12:23 compute-0 podman[73583]: 2025-12-11 09:12:23.085843213 +0000 UTC m=+0.142588552 container remove 0356c030f8e680c388e4ccfbf08c1a4477e438b1c6b5bdd2f4afecb6f073cc98 (image=quay.io/ceph/ceph:v19, name=upbeat_yalow, CEPH_REF=squid, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 11 09:12:23 compute-0 systemd[1]: libpod-conmon-0356c030f8e680c388e4ccfbf08c1a4477e438b1c6b5bdd2f4afecb6f073cc98.scope: Deactivated successfully.
Dec 11 09:12:23 compute-0 podman[73617]: 2025-12-11 09:12:23.148278756 +0000 UTC m=+0.042666199 container create 54961df9cd50ccb97f586d31a29f0a612be39ad90447dd0cf3ad2439fd0d89e7 (image=quay.io/ceph/ceph:v19, name=hopeful_mayer, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, CEPH_REF=squid, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Dec 11 09:12:23 compute-0 systemd[1]: Started libpod-conmon-54961df9cd50ccb97f586d31a29f0a612be39ad90447dd0cf3ad2439fd0d89e7.scope.
Dec 11 09:12:23 compute-0 systemd[1]: Started libcrun container.
Dec 11 09:12:23 compute-0 podman[73617]: 2025-12-11 09:12:23.203526478 +0000 UTC m=+0.097913941 container init 54961df9cd50ccb97f586d31a29f0a612be39ad90447dd0cf3ad2439fd0d89e7 (image=quay.io/ceph/ceph:v19, name=hopeful_mayer, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid)
Dec 11 09:12:23 compute-0 podman[73617]: 2025-12-11 09:12:23.209039219 +0000 UTC m=+0.103426662 container start 54961df9cd50ccb97f586d31a29f0a612be39ad90447dd0cf3ad2439fd0d89e7 (image=quay.io/ceph/ceph:v19, name=hopeful_mayer, CEPH_REF=squid, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2)
Dec 11 09:12:23 compute-0 podman[73617]: 2025-12-11 09:12:23.212765195 +0000 UTC m=+0.107152638 container attach 54961df9cd50ccb97f586d31a29f0a612be39ad90447dd0cf3ad2439fd0d89e7 (image=quay.io/ceph/ceph:v19, name=hopeful_mayer, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 11 09:12:23 compute-0 podman[73617]: 2025-12-11 09:12:23.130878505 +0000 UTC m=+0.025265978 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Dec 11 09:12:23 compute-0 hopeful_mayer[73633]: AQD3ijppj0qmDRAADpfOvJjUDij9v5FazxqhrQ==
Dec 11 09:12:23 compute-0 systemd[1]: libpod-54961df9cd50ccb97f586d31a29f0a612be39ad90447dd0cf3ad2439fd0d89e7.scope: Deactivated successfully.
Dec 11 09:12:23 compute-0 podman[73617]: 2025-12-11 09:12:23.232044666 +0000 UTC m=+0.126432119 container died 54961df9cd50ccb97f586d31a29f0a612be39ad90447dd0cf3ad2439fd0d89e7 (image=quay.io/ceph/ceph:v19, name=hopeful_mayer, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 11 09:12:23 compute-0 podman[73617]: 2025-12-11 09:12:23.263836395 +0000 UTC m=+0.158223838 container remove 54961df9cd50ccb97f586d31a29f0a612be39ad90447dd0cf3ad2439fd0d89e7 (image=quay.io/ceph/ceph:v19, name=hopeful_mayer, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Dec 11 09:12:23 compute-0 systemd[1]: libpod-conmon-54961df9cd50ccb97f586d31a29f0a612be39ad90447dd0cf3ad2439fd0d89e7.scope: Deactivated successfully.
Dec 11 09:12:23 compute-0 podman[73651]: 2025-12-11 09:12:23.322868124 +0000 UTC m=+0.039630595 container create 8a80312f36ce289fe14f961b39a77ccd90a16038bf275d4dcc0e11804f1fee9d (image=quay.io/ceph/ceph:v19, name=festive_rubin, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS)
Dec 11 09:12:23 compute-0 systemd[1]: Started libpod-conmon-8a80312f36ce289fe14f961b39a77ccd90a16038bf275d4dcc0e11804f1fee9d.scope.
Dec 11 09:12:23 compute-0 systemd[1]: Started libcrun container.
Dec 11 09:12:23 compute-0 podman[73651]: 2025-12-11 09:12:23.381487599 +0000 UTC m=+0.098250090 container init 8a80312f36ce289fe14f961b39a77ccd90a16038bf275d4dcc0e11804f1fee9d (image=quay.io/ceph/ceph:v19, name=festive_rubin, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 11 09:12:23 compute-0 podman[73651]: 2025-12-11 09:12:23.387355932 +0000 UTC m=+0.104118433 container start 8a80312f36ce289fe14f961b39a77ccd90a16038bf275d4dcc0e11804f1fee9d (image=quay.io/ceph/ceph:v19, name=festive_rubin, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0)
Dec 11 09:12:23 compute-0 podman[73651]: 2025-12-11 09:12:23.390828461 +0000 UTC m=+0.107590952 container attach 8a80312f36ce289fe14f961b39a77ccd90a16038bf275d4dcc0e11804f1fee9d (image=quay.io/ceph/ceph:v19, name=festive_rubin, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, OSD_FLAVOR=default)
Dec 11 09:12:23 compute-0 podman[73651]: 2025-12-11 09:12:23.306520945 +0000 UTC m=+0.023283436 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Dec 11 09:12:23 compute-0 festive_rubin[73667]: AQD3ijppjAiAGBAA1vnhtXdoH5GMcrMN3FAR7Q==
Dec 11 09:12:23 compute-0 systemd[1]: libpod-8a80312f36ce289fe14f961b39a77ccd90a16038bf275d4dcc0e11804f1fee9d.scope: Deactivated successfully.
Dec 11 09:12:23 compute-0 podman[73651]: 2025-12-11 09:12:23.41522324 +0000 UTC m=+0.131985711 container died 8a80312f36ce289fe14f961b39a77ccd90a16038bf275d4dcc0e11804f1fee9d (image=quay.io/ceph/ceph:v19, name=festive_rubin, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, OSD_FLAVOR=default)
Dec 11 09:12:23 compute-0 podman[73651]: 2025-12-11 09:12:23.451624184 +0000 UTC m=+0.168386655 container remove 8a80312f36ce289fe14f961b39a77ccd90a16038bf275d4dcc0e11804f1fee9d (image=quay.io/ceph/ceph:v19, name=festive_rubin, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Dec 11 09:12:23 compute-0 systemd[1]: libpod-conmon-8a80312f36ce289fe14f961b39a77ccd90a16038bf275d4dcc0e11804f1fee9d.scope: Deactivated successfully.
Dec 11 09:12:23 compute-0 podman[73685]: 2025-12-11 09:12:23.520287563 +0000 UTC m=+0.046444488 container create f1c2654e57848670c35099fd8344710048a16f073a2fedeea10e6862efc84240 (image=quay.io/ceph/ceph:v19, name=relaxed_lederberg, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 11 09:12:23 compute-0 systemd[1]: Started libpod-conmon-f1c2654e57848670c35099fd8344710048a16f073a2fedeea10e6862efc84240.scope.
Dec 11 09:12:23 compute-0 systemd[1]: Started libcrun container.
Dec 11 09:12:23 compute-0 podman[73685]: 2025-12-11 09:12:23.580691573 +0000 UTC m=+0.106848748 container init f1c2654e57848670c35099fd8344710048a16f073a2fedeea10e6862efc84240 (image=quay.io/ceph/ceph:v19, name=relaxed_lederberg, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325)
Dec 11 09:12:23 compute-0 podman[73685]: 2025-12-11 09:12:23.586459873 +0000 UTC m=+0.112616798 container start f1c2654e57848670c35099fd8344710048a16f073a2fedeea10e6862efc84240 (image=quay.io/ceph/ceph:v19, name=relaxed_lederberg, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 11 09:12:23 compute-0 podman[73685]: 2025-12-11 09:12:23.590431617 +0000 UTC m=+0.116588562 container attach f1c2654e57848670c35099fd8344710048a16f073a2fedeea10e6862efc84240 (image=quay.io/ceph/ceph:v19, name=relaxed_lederberg, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2)
Dec 11 09:12:23 compute-0 podman[73685]: 2025-12-11 09:12:23.50061782 +0000 UTC m=+0.026774755 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Dec 11 09:12:23 compute-0 relaxed_lederberg[73701]: AQD3ijppecMxJBAAeKQaLET6inrrHbSLWaDjjA==
Dec 11 09:12:23 compute-0 systemd[1]: libpod-f1c2654e57848670c35099fd8344710048a16f073a2fedeea10e6862efc84240.scope: Deactivated successfully.
Dec 11 09:12:23 compute-0 podman[73685]: 2025-12-11 09:12:23.611929366 +0000 UTC m=+0.138086291 container died f1c2654e57848670c35099fd8344710048a16f073a2fedeea10e6862efc84240 (image=quay.io/ceph/ceph:v19, name=relaxed_lederberg, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, ceph=True, CEPH_REF=squid)
Dec 11 09:12:23 compute-0 systemd[1]: var-lib-containers-storage-overlay-1d3e9eabf3b46a92d4e64079ee5296c47d645d6778725b14e9d0d6bb31b6afa5-merged.mount: Deactivated successfully.
Dec 11 09:12:23 compute-0 podman[73685]: 2025-12-11 09:12:23.651473018 +0000 UTC m=+0.177629943 container remove f1c2654e57848670c35099fd8344710048a16f073a2fedeea10e6862efc84240 (image=quay.io/ceph/ceph:v19, name=relaxed_lederberg, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 11 09:12:23 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Dec 11 09:12:23 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Dec 11 09:12:23 compute-0 systemd[1]: libpod-conmon-f1c2654e57848670c35099fd8344710048a16f073a2fedeea10e6862efc84240.scope: Deactivated successfully.
Dec 11 09:12:23 compute-0 podman[73720]: 2025-12-11 09:12:23.717293187 +0000 UTC m=+0.043716242 container create ee029322aa4296568beb9851a95acf7992aa5640bcad696c543fdce5f9adf72b (image=quay.io/ceph/ceph:v19, name=gifted_wu, org.label-schema.build-date=20250325, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Dec 11 09:12:23 compute-0 systemd[1]: Started libpod-conmon-ee029322aa4296568beb9851a95acf7992aa5640bcad696c543fdce5f9adf72b.scope.
Dec 11 09:12:23 compute-0 systemd[1]: Started libcrun container.
Dec 11 09:12:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5fb346743a1eca10584a934e15c4840ab23312758dea81e2a2c15edccf344aba/merged/tmp/monmap supports timestamps until 2038 (0x7fffffff)
Dec 11 09:12:23 compute-0 podman[73720]: 2025-12-11 09:12:23.778741691 +0000 UTC m=+0.105164746 container init ee029322aa4296568beb9851a95acf7992aa5640bcad696c543fdce5f9adf72b (image=quay.io/ceph/ceph:v19, name=gifted_wu, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Dec 11 09:12:23 compute-0 podman[73720]: 2025-12-11 09:12:23.785398609 +0000 UTC m=+0.111821664 container start ee029322aa4296568beb9851a95acf7992aa5640bcad696c543fdce5f9adf72b (image=quay.io/ceph/ceph:v19, name=gifted_wu, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Dec 11 09:12:23 compute-0 podman[73720]: 2025-12-11 09:12:23.789238039 +0000 UTC m=+0.115661114 container attach ee029322aa4296568beb9851a95acf7992aa5640bcad696c543fdce5f9adf72b (image=quay.io/ceph/ceph:v19, name=gifted_wu, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 11 09:12:23 compute-0 podman[73720]: 2025-12-11 09:12:23.696438418 +0000 UTC m=+0.022861503 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Dec 11 09:12:23 compute-0 gifted_wu[73737]: /usr/bin/monmaptool: monmap file /tmp/monmap
Dec 11 09:12:23 compute-0 gifted_wu[73737]: setting min_mon_release = quincy
Dec 11 09:12:23 compute-0 gifted_wu[73737]: /usr/bin/monmaptool: set fsid to 31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060
Dec 11 09:12:23 compute-0 gifted_wu[73737]: /usr/bin/monmaptool: writing epoch 0 to /tmp/monmap (1 monitors)
Dec 11 09:12:23 compute-0 systemd[1]: libpod-ee029322aa4296568beb9851a95acf7992aa5640bcad696c543fdce5f9adf72b.scope: Deactivated successfully.
Dec 11 09:12:23 compute-0 podman[73720]: 2025-12-11 09:12:23.818675615 +0000 UTC m=+0.145098670 container died ee029322aa4296568beb9851a95acf7992aa5640bcad696c543fdce5f9adf72b (image=quay.io/ceph/ceph:v19, name=gifted_wu, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, ceph=True, org.label-schema.schema-version=1.0)
Dec 11 09:12:23 compute-0 podman[73720]: 2025-12-11 09:12:23.859373522 +0000 UTC m=+0.185796577 container remove ee029322aa4296568beb9851a95acf7992aa5640bcad696c543fdce5f9adf72b (image=quay.io/ceph/ceph:v19, name=gifted_wu, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec 11 09:12:23 compute-0 systemd[1]: libpod-conmon-ee029322aa4296568beb9851a95acf7992aa5640bcad696c543fdce5f9adf72b.scope: Deactivated successfully.
Dec 11 09:12:23 compute-0 podman[73756]: 2025-12-11 09:12:23.932463548 +0000 UTC m=+0.047445078 container create 468d9b4d7ea6a72544bfc1d5fdb0197e5bd825a1885da25705c0b050c31c0206 (image=quay.io/ceph/ceph:v19, name=happy_rhodes, CEPH_REF=squid, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 11 09:12:23 compute-0 systemd[1]: Started libpod-conmon-468d9b4d7ea6a72544bfc1d5fdb0197e5bd825a1885da25705c0b050c31c0206.scope.
Dec 11 09:12:23 compute-0 systemd[1]: Started libcrun container.
Dec 11 09:12:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b168d6a91722c0874235e75440a3afc4bcdb27f8c8fdeb114e76ab13ec6d6bd1/merged/tmp/monmap supports timestamps until 2038 (0x7fffffff)
Dec 11 09:12:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b168d6a91722c0874235e75440a3afc4bcdb27f8c8fdeb114e76ab13ec6d6bd1/merged/tmp/keyring supports timestamps until 2038 (0x7fffffff)
Dec 11 09:12:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b168d6a91722c0874235e75440a3afc4bcdb27f8c8fdeb114e76ab13ec6d6bd1/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 11 09:12:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b168d6a91722c0874235e75440a3afc4bcdb27f8c8fdeb114e76ab13ec6d6bd1/merged/var/lib/ceph/mon/ceph-compute-0 supports timestamps until 2038 (0x7fffffff)
Dec 11 09:12:23 compute-0 podman[73756]: 2025-12-11 09:12:23.997595327 +0000 UTC m=+0.112576877 container init 468d9b4d7ea6a72544bfc1d5fdb0197e5bd825a1885da25705c0b050c31c0206 (image=quay.io/ceph/ceph:v19, name=happy_rhodes, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 11 09:12:24 compute-0 podman[73756]: 2025-12-11 09:12:24.004240824 +0000 UTC m=+0.119222354 container start 468d9b4d7ea6a72544bfc1d5fdb0197e5bd825a1885da25705c0b050c31c0206 (image=quay.io/ceph/ceph:v19, name=happy_rhodes, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 11 09:12:24 compute-0 podman[73756]: 2025-12-11 09:12:23.912830627 +0000 UTC m=+0.027812187 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Dec 11 09:12:24 compute-0 podman[73756]: 2025-12-11 09:12:24.031793702 +0000 UTC m=+0.146775252 container attach 468d9b4d7ea6a72544bfc1d5fdb0197e5bd825a1885da25705c0b050c31c0206 (image=quay.io/ceph/ceph:v19, name=happy_rhodes, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Dec 11 09:12:24 compute-0 systemd[1]: libpod-468d9b4d7ea6a72544bfc1d5fdb0197e5bd825a1885da25705c0b050c31c0206.scope: Deactivated successfully.
Dec 11 09:12:24 compute-0 conmon[73772]: conmon 468d9b4d7ea6a72544bf <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-468d9b4d7ea6a72544bfc1d5fdb0197e5bd825a1885da25705c0b050c31c0206.scope/container/memory.events
Dec 11 09:12:24 compute-0 podman[73756]: 2025-12-11 09:12:24.128806333 +0000 UTC m=+0.243787893 container died 468d9b4d7ea6a72544bfc1d5fdb0197e5bd825a1885da25705c0b050c31c0206 (image=quay.io/ceph/ceph:v19, name=happy_rhodes, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 11 09:12:24 compute-0 podman[73756]: 2025-12-11 09:12:24.168018534 +0000 UTC m=+0.283000064 container remove 468d9b4d7ea6a72544bfc1d5fdb0197e5bd825a1885da25705c0b050c31c0206 (image=quay.io/ceph/ceph:v19, name=happy_rhodes, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Dec 11 09:12:24 compute-0 systemd[1]: libpod-conmon-468d9b4d7ea6a72544bfc1d5fdb0197e5bd825a1885da25705c0b050c31c0206.scope: Deactivated successfully.
Dec 11 09:12:24 compute-0 systemd[1]: Reloading.
Dec 11 09:12:24 compute-0 systemd-rc-local-generator[73840]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 11 09:12:24 compute-0 systemd-sysv-generator[73843]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 11 09:12:24 compute-0 systemd[1]: Reloading.
Dec 11 09:12:24 compute-0 systemd-sysv-generator[73878]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 11 09:12:24 compute-0 systemd-rc-local-generator[73875]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 11 09:12:24 compute-0 systemd[1]: Reached target All Ceph clusters and services.
Dec 11 09:12:24 compute-0 systemd[1]: Reloading.
Dec 11 09:12:24 compute-0 systemd-sysv-generator[73915]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 11 09:12:24 compute-0 systemd-rc-local-generator[73912]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 11 09:12:25 compute-0 systemd[1]: Reached target Ceph cluster 31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060.
Dec 11 09:12:25 compute-0 systemd[1]: Reloading.
Dec 11 09:12:25 compute-0 systemd-rc-local-generator[73951]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 11 09:12:25 compute-0 systemd-sysv-generator[73954]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 11 09:12:25 compute-0 systemd[1]: Reloading.
Dec 11 09:12:25 compute-0 systemd-rc-local-generator[73992]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 11 09:12:25 compute-0 systemd-sysv-generator[73996]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 11 09:12:25 compute-0 systemd[1]: Created slice Slice /system/ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060.
Dec 11 09:12:25 compute-0 systemd[1]: Reached target System Time Set.
Dec 11 09:12:25 compute-0 systemd[1]: Reached target System Time Synchronized.
Dec 11 09:12:25 compute-0 systemd[1]: Starting Ceph mon.compute-0 for 31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060...
Dec 11 09:12:25 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Dec 11 09:12:25 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Dec 11 09:12:25 compute-0 podman[74048]: 2025-12-11 09:12:25.928066118 +0000 UTC m=+0.039627835 container create ad109b62b3cb477ead9c23ad98f79dee561c267db5880c9842f5eb391537bf24 (image=quay.io/ceph/ceph:v19, name=ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-mon-compute-0, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Dec 11 09:12:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/18befd82102ab7b45cfdb8261f9509bb898f8014e3d3eebff9612c988186e491/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 11 09:12:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/18befd82102ab7b45cfdb8261f9509bb898f8014e3d3eebff9612c988186e491/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 11 09:12:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/18befd82102ab7b45cfdb8261f9509bb898f8014e3d3eebff9612c988186e491/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 11 09:12:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/18befd82102ab7b45cfdb8261f9509bb898f8014e3d3eebff9612c988186e491/merged/var/lib/ceph/mon/ceph-compute-0 supports timestamps until 2038 (0x7fffffff)
Dec 11 09:12:25 compute-0 podman[74048]: 2025-12-11 09:12:25.993488785 +0000 UTC m=+0.105050522 container init ad109b62b3cb477ead9c23ad98f79dee561c267db5880c9842f5eb391537bf24 (image=quay.io/ceph/ceph:v19, name=ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-mon-compute-0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Dec 11 09:12:25 compute-0 podman[74048]: 2025-12-11 09:12:25.999145271 +0000 UTC m=+0.110706988 container start ad109b62b3cb477ead9c23ad98f79dee561c267db5880c9842f5eb391537bf24 (image=quay.io/ceph/ceph:v19, name=ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-mon-compute-0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 11 09:12:26 compute-0 podman[74048]: 2025-12-11 09:12:25.911735618 +0000 UTC m=+0.023297355 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Dec 11 09:12:26 compute-0 bash[74048]: ad109b62b3cb477ead9c23ad98f79dee561c267db5880c9842f5eb391537bf24
Dec 11 09:12:26 compute-0 systemd[1]: Started Ceph mon.compute-0 for 31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060.
Dec 11 09:12:26 compute-0 ceph-mon[74068]: set uid:gid to 167:167 (ceph:ceph)
Dec 11 09:12:26 compute-0 ceph-mon[74068]: ceph version 19.2.3 (c92aebb279828e9c3c1f5d24613efca272649e62) squid (stable), process ceph-mon, pid 2
Dec 11 09:12:26 compute-0 ceph-mon[74068]: pidfile_write: ignore empty --pid-file
Dec 11 09:12:26 compute-0 ceph-mon[74068]: load: jerasure load: lrc 
Dec 11 09:12:26 compute-0 ceph-mon[74068]: rocksdb: RocksDB version: 7.9.2
Dec 11 09:12:26 compute-0 ceph-mon[74068]: rocksdb: Git sha 0
Dec 11 09:12:26 compute-0 ceph-mon[74068]: rocksdb: Compile date 2025-07-17 03:12:14
Dec 11 09:12:26 compute-0 ceph-mon[74068]: rocksdb: DB SUMMARY
Dec 11 09:12:26 compute-0 ceph-mon[74068]: rocksdb: DB Session ID:  PIAQ6ITHKE5NFEW28T9E
Dec 11 09:12:26 compute-0 ceph-mon[74068]: rocksdb: CURRENT file:  CURRENT
Dec 11 09:12:26 compute-0 ceph-mon[74068]: rocksdb: IDENTITY file:  IDENTITY
Dec 11 09:12:26 compute-0 ceph-mon[74068]: rocksdb: MANIFEST file:  MANIFEST-000005 size: 59 Bytes
Dec 11 09:12:26 compute-0 ceph-mon[74068]: rocksdb: SST files in /var/lib/ceph/mon/ceph-compute-0/store.db dir, Total Num: 0, files: 
Dec 11 09:12:26 compute-0 ceph-mon[74068]: rocksdb: Write Ahead Log file in /var/lib/ceph/mon/ceph-compute-0/store.db: 000004.log size: 807 ; 
Dec 11 09:12:26 compute-0 ceph-mon[74068]: rocksdb:                         Options.error_if_exists: 0
Dec 11 09:12:26 compute-0 ceph-mon[74068]: rocksdb:                       Options.create_if_missing: 0
Dec 11 09:12:26 compute-0 ceph-mon[74068]: rocksdb:                         Options.paranoid_checks: 1
Dec 11 09:12:26 compute-0 ceph-mon[74068]: rocksdb:             Options.flush_verify_memtable_count: 1
Dec 11 09:12:26 compute-0 ceph-mon[74068]: rocksdb:                               Options.track_and_verify_wals_in_manifest: 0
Dec 11 09:12:26 compute-0 ceph-mon[74068]: rocksdb:        Options.verify_sst_unique_id_in_manifest: 1
Dec 11 09:12:26 compute-0 ceph-mon[74068]: rocksdb:                                     Options.env: 0x5584595b1c20
Dec 11 09:12:26 compute-0 ceph-mon[74068]: rocksdb:                                      Options.fs: PosixFileSystem
Dec 11 09:12:26 compute-0 ceph-mon[74068]: rocksdb:                                Options.info_log: 0x55845aa19940
Dec 11 09:12:26 compute-0 ceph-mon[74068]: rocksdb:                Options.max_file_opening_threads: 16
Dec 11 09:12:26 compute-0 ceph-mon[74068]: rocksdb:                              Options.statistics: (nil)
Dec 11 09:12:26 compute-0 ceph-mon[74068]: rocksdb:                               Options.use_fsync: 0
Dec 11 09:12:26 compute-0 ceph-mon[74068]: rocksdb:                       Options.max_log_file_size: 0
Dec 11 09:12:26 compute-0 ceph-mon[74068]: rocksdb:                  Options.max_manifest_file_size: 1073741824
Dec 11 09:12:26 compute-0 ceph-mon[74068]: rocksdb:                   Options.log_file_time_to_roll: 0
Dec 11 09:12:26 compute-0 ceph-mon[74068]: rocksdb:                       Options.keep_log_file_num: 1000
Dec 11 09:12:26 compute-0 ceph-mon[74068]: rocksdb:                    Options.recycle_log_file_num: 0
Dec 11 09:12:26 compute-0 ceph-mon[74068]: rocksdb:                         Options.allow_fallocate: 1
Dec 11 09:12:26 compute-0 ceph-mon[74068]: rocksdb:                        Options.allow_mmap_reads: 0
Dec 11 09:12:26 compute-0 ceph-mon[74068]: rocksdb:                       Options.allow_mmap_writes: 0
Dec 11 09:12:26 compute-0 ceph-mon[74068]: rocksdb:                        Options.use_direct_reads: 0
Dec 11 09:12:26 compute-0 ceph-mon[74068]: rocksdb:                        Options.use_direct_io_for_flush_and_compaction: 0
Dec 11 09:12:26 compute-0 ceph-mon[74068]: rocksdb:          Options.create_missing_column_families: 0
Dec 11 09:12:26 compute-0 ceph-mon[74068]: rocksdb:                              Options.db_log_dir: 
Dec 11 09:12:26 compute-0 ceph-mon[74068]: rocksdb:                                 Options.wal_dir: 
Dec 11 09:12:26 compute-0 ceph-mon[74068]: rocksdb:                Options.table_cache_numshardbits: 6
Dec 11 09:12:26 compute-0 ceph-mon[74068]: rocksdb:                         Options.WAL_ttl_seconds: 0
Dec 11 09:12:26 compute-0 ceph-mon[74068]: rocksdb:                       Options.WAL_size_limit_MB: 0
Dec 11 09:12:26 compute-0 ceph-mon[74068]: rocksdb:                        Options.max_write_batch_group_size_bytes: 1048576
Dec 11 09:12:26 compute-0 ceph-mon[74068]: rocksdb:             Options.manifest_preallocation_size: 4194304
Dec 11 09:12:26 compute-0 ceph-mon[74068]: rocksdb:                     Options.is_fd_close_on_exec: 1
Dec 11 09:12:26 compute-0 ceph-mon[74068]: rocksdb:                   Options.advise_random_on_open: 1
Dec 11 09:12:26 compute-0 ceph-mon[74068]: rocksdb:                    Options.db_write_buffer_size: 0
Dec 11 09:12:26 compute-0 ceph-mon[74068]: rocksdb:                    Options.write_buffer_manager: 0x55845aa1d900
Dec 11 09:12:26 compute-0 ceph-mon[74068]: rocksdb:         Options.access_hint_on_compaction_start: 1
Dec 11 09:12:26 compute-0 ceph-mon[74068]: rocksdb:           Options.random_access_max_buffer_size: 1048576
Dec 11 09:12:26 compute-0 ceph-mon[74068]: rocksdb:                      Options.use_adaptive_mutex: 0
Dec 11 09:12:26 compute-0 ceph-mon[74068]: rocksdb:                            Options.rate_limiter: (nil)
Dec 11 09:12:26 compute-0 ceph-mon[74068]: rocksdb:     Options.sst_file_manager.rate_bytes_per_sec: 0
Dec 11 09:12:26 compute-0 ceph-mon[74068]: rocksdb:                       Options.wal_recovery_mode: 2
Dec 11 09:12:26 compute-0 ceph-mon[74068]: rocksdb:                  Options.enable_thread_tracking: 0
Dec 11 09:12:26 compute-0 ceph-mon[74068]: rocksdb:                  Options.enable_pipelined_write: 0
Dec 11 09:12:26 compute-0 ceph-mon[74068]: rocksdb:                  Options.unordered_write: 0
Dec 11 09:12:26 compute-0 ceph-mon[74068]: rocksdb:         Options.allow_concurrent_memtable_write: 1
Dec 11 09:12:26 compute-0 ceph-mon[74068]: rocksdb:      Options.enable_write_thread_adaptive_yield: 1
Dec 11 09:12:26 compute-0 ceph-mon[74068]: rocksdb:             Options.write_thread_max_yield_usec: 100
Dec 11 09:12:26 compute-0 ceph-mon[74068]: rocksdb:            Options.write_thread_slow_yield_usec: 3
Dec 11 09:12:26 compute-0 ceph-mon[74068]: rocksdb:                               Options.row_cache: None
Dec 11 09:12:26 compute-0 ceph-mon[74068]: rocksdb:                              Options.wal_filter: None
Dec 11 09:12:26 compute-0 ceph-mon[74068]: rocksdb:             Options.avoid_flush_during_recovery: 0
Dec 11 09:12:26 compute-0 ceph-mon[74068]: rocksdb:             Options.allow_ingest_behind: 0
Dec 11 09:12:26 compute-0 ceph-mon[74068]: rocksdb:             Options.two_write_queues: 0
Dec 11 09:12:26 compute-0 ceph-mon[74068]: rocksdb:             Options.manual_wal_flush: 0
Dec 11 09:12:26 compute-0 ceph-mon[74068]: rocksdb:             Options.wal_compression: 0
Dec 11 09:12:26 compute-0 ceph-mon[74068]: rocksdb:             Options.atomic_flush: 0
Dec 11 09:12:26 compute-0 ceph-mon[74068]: rocksdb:             Options.avoid_unnecessary_blocking_io: 0
Dec 11 09:12:26 compute-0 ceph-mon[74068]: rocksdb:                 Options.persist_stats_to_disk: 0
Dec 11 09:12:26 compute-0 ceph-mon[74068]: rocksdb:                 Options.write_dbid_to_manifest: 0
Dec 11 09:12:26 compute-0 ceph-mon[74068]: rocksdb:                 Options.log_readahead_size: 0
Dec 11 09:12:26 compute-0 ceph-mon[74068]: rocksdb:                 Options.file_checksum_gen_factory: Unknown
Dec 11 09:12:26 compute-0 ceph-mon[74068]: rocksdb:                 Options.best_efforts_recovery: 0
Dec 11 09:12:26 compute-0 ceph-mon[74068]: rocksdb:                Options.max_bgerror_resume_count: 2147483647
Dec 11 09:12:26 compute-0 ceph-mon[74068]: rocksdb:            Options.bgerror_resume_retry_interval: 1000000
Dec 11 09:12:26 compute-0 ceph-mon[74068]: rocksdb:             Options.allow_data_in_errors: 0
Dec 11 09:12:26 compute-0 ceph-mon[74068]: rocksdb:             Options.db_host_id: __hostname__
Dec 11 09:12:26 compute-0 ceph-mon[74068]: rocksdb:             Options.enforce_single_del_contracts: true
Dec 11 09:12:26 compute-0 ceph-mon[74068]: rocksdb:             Options.max_background_jobs: 2
Dec 11 09:12:26 compute-0 ceph-mon[74068]: rocksdb:             Options.max_background_compactions: -1
Dec 11 09:12:26 compute-0 ceph-mon[74068]: rocksdb:             Options.max_subcompactions: 1
Dec 11 09:12:26 compute-0 ceph-mon[74068]: rocksdb:             Options.avoid_flush_during_shutdown: 0
Dec 11 09:12:26 compute-0 ceph-mon[74068]: rocksdb:           Options.writable_file_max_buffer_size: 1048576
Dec 11 09:12:26 compute-0 ceph-mon[74068]: rocksdb:             Options.delayed_write_rate : 16777216
Dec 11 09:12:26 compute-0 ceph-mon[74068]: rocksdb:             Options.max_total_wal_size: 0
Dec 11 09:12:26 compute-0 ceph-mon[74068]: rocksdb:             Options.delete_obsolete_files_period_micros: 21600000000
Dec 11 09:12:26 compute-0 ceph-mon[74068]: rocksdb:                   Options.stats_dump_period_sec: 600
Dec 11 09:12:26 compute-0 ceph-mon[74068]: rocksdb:                 Options.stats_persist_period_sec: 600
Dec 11 09:12:26 compute-0 ceph-mon[74068]: rocksdb:                 Options.stats_history_buffer_size: 1048576
Dec 11 09:12:26 compute-0 ceph-mon[74068]: rocksdb:                          Options.max_open_files: -1
Dec 11 09:12:26 compute-0 ceph-mon[74068]: rocksdb:                          Options.bytes_per_sync: 0
Dec 11 09:12:26 compute-0 ceph-mon[74068]: rocksdb:                      Options.wal_bytes_per_sync: 0
Dec 11 09:12:26 compute-0 ceph-mon[74068]: rocksdb:                   Options.strict_bytes_per_sync: 0
Dec 11 09:12:26 compute-0 ceph-mon[74068]: rocksdb:       Options.compaction_readahead_size: 0
Dec 11 09:12:26 compute-0 ceph-mon[74068]: rocksdb:                  Options.max_background_flushes: -1
Dec 11 09:12:26 compute-0 ceph-mon[74068]: rocksdb: Compression algorithms supported:
Dec 11 09:12:26 compute-0 ceph-mon[74068]: rocksdb:         kZSTD supported: 0
Dec 11 09:12:26 compute-0 ceph-mon[74068]: rocksdb:         kXpressCompression supported: 0
Dec 11 09:12:26 compute-0 ceph-mon[74068]: rocksdb:         kBZip2Compression supported: 0
Dec 11 09:12:26 compute-0 ceph-mon[74068]: rocksdb:         kZSTDNotFinalCompression supported: 0
Dec 11 09:12:26 compute-0 ceph-mon[74068]: rocksdb:         kLZ4Compression supported: 1
Dec 11 09:12:26 compute-0 ceph-mon[74068]: rocksdb:         kZlibCompression supported: 1
Dec 11 09:12:26 compute-0 ceph-mon[74068]: rocksdb:         kLZ4HCCompression supported: 1
Dec 11 09:12:26 compute-0 ceph-mon[74068]: rocksdb:         kSnappyCompression supported: 1
Dec 11 09:12:26 compute-0 ceph-mon[74068]: rocksdb: Fast CRC32 supported: Supported on x86
Dec 11 09:12:26 compute-0 ceph-mon[74068]: rocksdb: DMutex implementation: pthread_mutex_t
Dec 11 09:12:26 compute-0 ceph-mon[74068]: rocksdb: [db/version_set.cc:5527] Recovering from manifest file: /var/lib/ceph/mon/ceph-compute-0/store.db/MANIFEST-000005
Dec 11 09:12:26 compute-0 ceph-mon[74068]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [default]:
Dec 11 09:12:26 compute-0 ceph-mon[74068]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec 11 09:12:26 compute-0 ceph-mon[74068]: rocksdb:           Options.merge_operator: 
Dec 11 09:12:26 compute-0 ceph-mon[74068]: rocksdb:        Options.compaction_filter: None
Dec 11 09:12:26 compute-0 ceph-mon[74068]: rocksdb:        Options.compaction_filter_factory: None
Dec 11 09:12:26 compute-0 ceph-mon[74068]: rocksdb:  Options.sst_partitioner_factory: None
Dec 11 09:12:26 compute-0 ceph-mon[74068]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec 11 09:12:26 compute-0 ceph-mon[74068]: rocksdb:            Options.table_factory: BlockBasedTable
Dec 11 09:12:26 compute-0 ceph-mon[74068]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55845aa195e0)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x55845aa3c9b0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 536870912
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Dec 11 09:12:26 compute-0 ceph-mon[74068]: rocksdb:        Options.write_buffer_size: 33554432
Dec 11 09:12:26 compute-0 ceph-mon[74068]: rocksdb:  Options.max_write_buffer_number: 2
Dec 11 09:12:26 compute-0 ceph-mon[74068]: rocksdb:          Options.compression: NoCompression
Dec 11 09:12:26 compute-0 ceph-mon[74068]: rocksdb:                  Options.bottommost_compression: Disabled
Dec 11 09:12:26 compute-0 ceph-mon[74068]: rocksdb:       Options.prefix_extractor: nullptr
Dec 11 09:12:26 compute-0 ceph-mon[74068]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec 11 09:12:26 compute-0 ceph-mon[74068]: rocksdb:             Options.num_levels: 7
Dec 11 09:12:26 compute-0 ceph-mon[74068]: rocksdb:        Options.min_write_buffer_number_to_merge: 1
Dec 11 09:12:26 compute-0 ceph-mon[74068]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec 11 09:12:26 compute-0 ceph-mon[74068]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec 11 09:12:26 compute-0 ceph-mon[74068]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec 11 09:12:26 compute-0 ceph-mon[74068]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec 11 09:12:26 compute-0 ceph-mon[74068]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec 11 09:12:26 compute-0 ceph-mon[74068]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec 11 09:12:26 compute-0 ceph-mon[74068]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec 11 09:12:26 compute-0 ceph-mon[74068]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec 11 09:12:26 compute-0 ceph-mon[74068]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec 11 09:12:26 compute-0 ceph-mon[74068]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec 11 09:12:26 compute-0 ceph-mon[74068]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec 11 09:12:26 compute-0 ceph-mon[74068]: rocksdb:            Options.compression_opts.window_bits: -14
Dec 11 09:12:26 compute-0 ceph-mon[74068]: rocksdb:                  Options.compression_opts.level: 32767
Dec 11 09:12:26 compute-0 ceph-mon[74068]: rocksdb:               Options.compression_opts.strategy: 0
Dec 11 09:12:26 compute-0 ceph-mon[74068]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec 11 09:12:26 compute-0 ceph-mon[74068]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec 11 09:12:26 compute-0 ceph-mon[74068]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec 11 09:12:26 compute-0 ceph-mon[74068]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec 11 09:12:26 compute-0 ceph-mon[74068]: rocksdb:                  Options.compression_opts.enabled: false
Dec 11 09:12:26 compute-0 ceph-mon[74068]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec 11 09:12:26 compute-0 ceph-mon[74068]: rocksdb:      Options.level0_file_num_compaction_trigger: 4
Dec 11 09:12:26 compute-0 ceph-mon[74068]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec 11 09:12:26 compute-0 ceph-mon[74068]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec 11 09:12:26 compute-0 ceph-mon[74068]: rocksdb:                   Options.target_file_size_base: 67108864
Dec 11 09:12:26 compute-0 ceph-mon[74068]: rocksdb:             Options.target_file_size_multiplier: 1
Dec 11 09:12:26 compute-0 ceph-mon[74068]: rocksdb:                Options.max_bytes_for_level_base: 268435456
Dec 11 09:12:26 compute-0 ceph-mon[74068]: rocksdb: Options.level_compaction_dynamic_level_bytes: 1
Dec 11 09:12:26 compute-0 ceph-mon[74068]: rocksdb:          Options.max_bytes_for_level_multiplier: 10.000000
Dec 11 09:12:26 compute-0 ceph-mon[74068]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec 11 09:12:26 compute-0 ceph-mon[74068]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec 11 09:12:26 compute-0 ceph-mon[74068]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec 11 09:12:26 compute-0 ceph-mon[74068]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec 11 09:12:26 compute-0 ceph-mon[74068]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec 11 09:12:26 compute-0 ceph-mon[74068]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec 11 09:12:26 compute-0 ceph-mon[74068]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec 11 09:12:26 compute-0 ceph-mon[74068]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec 11 09:12:26 compute-0 ceph-mon[74068]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec 11 09:12:26 compute-0 ceph-mon[74068]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec 11 09:12:26 compute-0 ceph-mon[74068]: rocksdb:                        Options.arena_block_size: 1048576
Dec 11 09:12:26 compute-0 ceph-mon[74068]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec 11 09:12:26 compute-0 ceph-mon[74068]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec 11 09:12:26 compute-0 ceph-mon[74068]: rocksdb:                Options.disable_auto_compactions: 0
Dec 11 09:12:26 compute-0 ceph-mon[74068]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec 11 09:12:26 compute-0 ceph-mon[74068]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec 11 09:12:26 compute-0 ceph-mon[74068]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec 11 09:12:26 compute-0 ceph-mon[74068]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec 11 09:12:26 compute-0 ceph-mon[74068]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec 11 09:12:26 compute-0 ceph-mon[74068]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec 11 09:12:26 compute-0 ceph-mon[74068]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec 11 09:12:26 compute-0 ceph-mon[74068]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec 11 09:12:26 compute-0 ceph-mon[74068]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec 11 09:12:26 compute-0 ceph-mon[74068]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec 11 09:12:26 compute-0 ceph-mon[74068]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec 11 09:12:26 compute-0 ceph-mon[74068]: rocksdb:                   Options.inplace_update_support: 0
Dec 11 09:12:26 compute-0 ceph-mon[74068]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec 11 09:12:26 compute-0 ceph-mon[74068]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec 11 09:12:26 compute-0 ceph-mon[74068]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec 11 09:12:26 compute-0 ceph-mon[74068]: rocksdb:   Options.memtable_huge_page_size: 0
Dec 11 09:12:26 compute-0 ceph-mon[74068]: rocksdb:                           Options.bloom_locality: 0
Dec 11 09:12:26 compute-0 ceph-mon[74068]: rocksdb:                    Options.max_successive_merges: 0
Dec 11 09:12:26 compute-0 ceph-mon[74068]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec 11 09:12:26 compute-0 ceph-mon[74068]: rocksdb:                Options.paranoid_file_checks: 0
Dec 11 09:12:26 compute-0 ceph-mon[74068]: rocksdb:                Options.force_consistency_checks: 1
Dec 11 09:12:26 compute-0 ceph-mon[74068]: rocksdb:                Options.report_bg_io_stats: 0
Dec 11 09:12:26 compute-0 ceph-mon[74068]: rocksdb:                               Options.ttl: 2592000
Dec 11 09:12:26 compute-0 ceph-mon[74068]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec 11 09:12:26 compute-0 ceph-mon[74068]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec 11 09:12:26 compute-0 ceph-mon[74068]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec 11 09:12:26 compute-0 ceph-mon[74068]: rocksdb:                       Options.enable_blob_files: false
Dec 11 09:12:26 compute-0 ceph-mon[74068]: rocksdb:                           Options.min_blob_size: 0
Dec 11 09:12:26 compute-0 ceph-mon[74068]: rocksdb:                          Options.blob_file_size: 268435456
Dec 11 09:12:26 compute-0 ceph-mon[74068]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec 11 09:12:26 compute-0 ceph-mon[74068]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec 11 09:12:26 compute-0 ceph-mon[74068]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec 11 09:12:26 compute-0 ceph-mon[74068]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec 11 09:12:26 compute-0 ceph-mon[74068]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec 11 09:12:26 compute-0 ceph-mon[74068]: rocksdb:                Options.blob_file_starting_level: 0
Dec 11 09:12:26 compute-0 ceph-mon[74068]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec 11 09:12:26 compute-0 ceph-mon[74068]: rocksdb: [db/version_set.cc:5566] Recovered from manifest file:/var/lib/ceph/mon/ceph-compute-0/store.db/MANIFEST-000005 succeeded,manifest_file_number is 5, next_file_number is 7, last_sequence is 0, log_number is 0,prev_log_number is 0,max_column_family is 0,min_log_number_to_keep is 0
Dec 11 09:12:26 compute-0 ceph-mon[74068]: rocksdb: [db/version_set.cc:5581] Column family [default] (ID 0), log number is 0
Dec 11 09:12:26 compute-0 ceph-mon[74068]: rocksdb: [db/db_impl/db_impl_open.cc:539] DB ID: e1ac8294-d02a-459a-8058-7f05c4f78e7d
Dec 11 09:12:26 compute-0 ceph-mon[74068]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765444346045799, "job": 1, "event": "recovery_started", "wal_files": [4]}
Dec 11 09:12:26 compute-0 ceph-mon[74068]: rocksdb: [db/db_impl/db_impl_open.cc:1043] Recovering log #4 mode 2
Dec 11 09:12:26 compute-0 ceph-mon[74068]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765444346048452, "cf_name": "default", "job": 1, "event": "table_file_creation", "file_number": 8, "file_size": 1944, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 1, "largest_seqno": 5, "table_properties": {"data_size": 819, "index_size": 31, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 115, "raw_average_key_size": 23, "raw_value_size": 696, "raw_average_value_size": 139, "num_data_blocks": 1, "num_entries": 5, "num_filter_entries": 5, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765444346, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "e1ac8294-d02a-459a-8058-7f05c4f78e7d", "db_session_id": "PIAQ6ITHKE5NFEW28T9E", "orig_file_number": 8, "seqno_to_time_mapping": "N/A"}}
Dec 11 09:12:26 compute-0 ceph-mon[74068]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765444346048613, "job": 1, "event": "recovery_finished"}
Dec 11 09:12:26 compute-0 ceph-mon[74068]: rocksdb: [db/version_set.cc:5047] Creating manifest 10
Dec 11 09:12:26 compute-0 ceph-mon[74068]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000004.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 11 09:12:26 compute-0 ceph-mon[74068]: rocksdb: [db/db_impl/db_impl_open.cc:1987] SstFileManager instance 0x55845aa3ee00
Dec 11 09:12:26 compute-0 ceph-mon[74068]: rocksdb: DB pointer 0x55845aa4e000
Dec 11 09:12:26 compute-0 ceph-mon[74068]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Dec 11 09:12:26 compute-0 ceph-mon[74068]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 0.0 total, 0.0 interval
                                           Cumulative writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 GB, 0.00 MB/s
                                           Cumulative WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 MB, 0.00 MB/s
                                           Interval WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.90 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.7      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Sum      1/0    1.90 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.7      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.7      0.00              0.00         1    0.003       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.7      0.00              0.00         1    0.003       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.0 total, 0.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.13 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.13 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55845aa3c9b0#2 capacity: 512.00 MB usage: 0.22 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 0 last_secs: 7.1e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(1,0.11 KB,2.08616e-05%) IndexBlock(1,0.11 KB,2.08616e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
Dec 11 09:12:26 compute-0 ceph-mon[74068]: starting mon.compute-0 rank 0 at public addrs [v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0] at bind addrs [v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0] mon_data /var/lib/ceph/mon/ceph-compute-0 fsid 31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060
Dec 11 09:12:26 compute-0 ceph-mon[74068]: mon.compute-0@-1(???) e0 preinit fsid 31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060
Dec 11 09:12:26 compute-0 ceph-mon[74068]: mon.compute-0@-1(probing) e0  my rank is now 0 (was -1)
Dec 11 09:12:26 compute-0 ceph-mon[74068]: mon.compute-0@0(probing) e0 win_standalone_election
Dec 11 09:12:26 compute-0 ceph-mon[74068]: paxos.0).electionLogic(0) init, first boot, initializing epoch at 1 
Dec 11 09:12:26 compute-0 ceph-mon[74068]: mon.compute-0@0(electing) e0 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Dec 11 09:12:26 compute-0 ceph-mon[74068]: log_channel(cluster) log [INF] : mon.compute-0 is new leader, mons compute-0 in quorum (ranks 0)
Dec 11 09:12:26 compute-0 ceph-mon[74068]: mon.compute-0@0(leader).osd e0 create_pending setting backfillfull_ratio = 0.9
Dec 11 09:12:26 compute-0 ceph-mon[74068]: mon.compute-0@0(leader).osd e0 create_pending setting full_ratio = 0.95
Dec 11 09:12:26 compute-0 ceph-mon[74068]: mon.compute-0@0(leader).osd e0 create_pending setting nearfull_ratio = 0.85
Dec 11 09:12:26 compute-0 ceph-mon[74068]: mon.compute-0@0(leader).osd e0 do_prune osdmap full prune enabled
Dec 11 09:12:26 compute-0 ceph-mon[74068]: mon.compute-0@0(leader).osd e0 encode_pending skipping prime_pg_temp; mapping job did not start
Dec 11 09:12:26 compute-0 ceph-mon[74068]: mon.compute-0@0(leader) e0 _apply_compatset_features enabling new quorum features: compat={},rocompat={},incompat={4=support erasure code pools,5=new-style osdmap encoding,6=support isa/lrc erasure code,7=support shec erasure code}
Dec 11 09:12:26 compute-0 podman[74069]: 2025-12-11 09:12:26.156573984 +0000 UTC m=+0.109351347 container create fbc6f48ffab2f2df6ac68cbd92d8d612dd836f6350467cd0866fa21853f21363 (image=quay.io/ceph/ceph:v19, name=gallant_lamport, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325)
Dec 11 09:12:26 compute-0 ceph-mon[74068]: mon.compute-0@0(leader).paxosservice(auth 0..0) refresh upgraded, format 3 -> 0
Dec 11 09:12:26 compute-0 podman[74069]: 2025-12-11 09:12:26.073783486 +0000 UTC m=+0.026560869 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Dec 11 09:12:26 compute-0 ceph-mon[74068]: mon.compute-0@0(probing) e1 win_standalone_election
Dec 11 09:12:26 compute-0 ceph-mon[74068]: paxos.0).electionLogic(2) init, last seen epoch 2
Dec 11 09:12:26 compute-0 ceph-mon[74068]: mon.compute-0@0(electing) e1 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Dec 11 09:12:26 compute-0 ceph-mon[74068]: log_channel(cluster) log [INF] : mon.compute-0 is new leader, mons compute-0 in quorum (ranks 0)
Dec 11 09:12:26 compute-0 ceph-mon[74068]: log_channel(cluster) log [DBG] : monmap epoch 1
Dec 11 09:12:26 compute-0 ceph-mon[74068]: log_channel(cluster) log [DBG] : fsid 31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060
Dec 11 09:12:26 compute-0 ceph-mon[74068]: log_channel(cluster) log [DBG] : last_changed 2025-12-11T09:12:23.814502+0000
Dec 11 09:12:26 compute-0 ceph-mon[74068]: log_channel(cluster) log [DBG] : created 2025-12-11T09:12:23.814502+0000
Dec 11 09:12:26 compute-0 ceph-mon[74068]: log_channel(cluster) log [DBG] : min_mon_release 19 (squid)
Dec 11 09:12:26 compute-0 ceph-mon[74068]: log_channel(cluster) log [DBG] : election_strategy: 1
Dec 11 09:12:26 compute-0 ceph-mon[74068]: log_channel(cluster) log [DBG] : 0: [v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0] mon.compute-0
Dec 11 09:12:26 compute-0 ceph-mon[74068]: mon.compute-0@0(leader) e1 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Dec 11 09:12:26 compute-0 ceph-mon[74068]: mgrc update_daemon_metadata mon.compute-0 metadata {addrs=[v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0],arch=x86_64,ceph_release=squid,ceph_version=ceph version 19.2.3 (c92aebb279828e9c3c1f5d24613efca272649e62) squid (stable),ceph_version_short=19.2.3,compression_algorithms=none, snappy, zlib, zstd, lz4,container_hostname=compute-0,container_image=quay.io/ceph/ceph:v19,cpu=AMD EPYC-Rome Processor,device_ids=,device_paths=vda=/dev/disk/by-path/pci-0000:00:04.0,devices=vda,distro=centos,distro_description=CentOS Stream 9,distro_version=9,hostname=compute-0,kernel_description=#1 SMP PREEMPT_DYNAMIC Fri Dec 5 11:18:23 UTC 2025,kernel_version=5.14.0-648.el9.x86_64,mem_swap_kb=1048572,mem_total_kb=7864300,os=Linux}
Dec 11 09:12:26 compute-0 ceph-mon[74068]: mon.compute-0@0(leader).osd e0 create_pending setting backfillfull_ratio = 0.9
Dec 11 09:12:26 compute-0 ceph-mon[74068]: mon.compute-0@0(leader).osd e0 create_pending setting full_ratio = 0.95
Dec 11 09:12:26 compute-0 ceph-mon[74068]: mon.compute-0@0(leader).osd e0 create_pending setting nearfull_ratio = 0.85
Dec 11 09:12:26 compute-0 ceph-mon[74068]: mon.compute-0@0(leader).osd e0 do_prune osdmap full prune enabled
Dec 11 09:12:26 compute-0 ceph-mon[74068]: mon.compute-0@0(leader).osd e0 encode_pending skipping prime_pg_temp; mapping job did not start
Dec 11 09:12:26 compute-0 ceph-mon[74068]: mon.compute-0@0(leader) e1 _apply_compatset_features enabling new quorum features: compat={},rocompat={},incompat={8=support monmap features,9=luminous ondisk layout,10=mimic ondisk layout,11=nautilus ondisk layout,12=octopus ondisk layout,13=pacific ondisk layout,14=quincy ondisk layout,15=reef ondisk layout,16=squid ondisk layout}
Dec 11 09:12:26 compute-0 ceph-mon[74068]: mon.compute-0@0(leader).mds e1 new map
Dec 11 09:12:26 compute-0 ceph-mon[74068]: mon.compute-0@0(leader).mds e1 print_map
                                           e1
                                           btime 2025-12-11T09:12:26:191373+0000
                                           enable_multiple, ever_enabled_multiple: 1,1
                                           default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}
                                           legacy client fscid: -1
                                            
                                           No filesystems configured
Dec 11 09:12:26 compute-0 ceph-mon[74068]: mon.compute-0@0(leader).paxosservice(auth 0..0) refresh upgraded, format 3 -> 0
Dec 11 09:12:26 compute-0 ceph-mon[74068]: log_channel(cluster) log [DBG] : fsmap 
Dec 11 09:12:26 compute-0 systemd[1]: Started libpod-conmon-fbc6f48ffab2f2df6ac68cbd92d8d612dd836f6350467cd0866fa21853f21363.scope.
Dec 11 09:12:26 compute-0 ceph-mon[74068]: mon.compute-0@0(leader).osd e0 _set_cache_ratios kv ratio 0.25 inc ratio 0.375 full ratio 0.375
Dec 11 09:12:26 compute-0 ceph-mon[74068]: mon.compute-0@0(leader).osd e0 register_cache_with_pcm pcm target: 2147483648 pcm max: 1020054732 pcm min: 134217728 inc_osd_cache size: 1
Dec 11 09:12:26 compute-0 ceph-mon[74068]: mon.compute-0@0(leader).osd e1 e1: 0 total, 0 up, 0 in
Dec 11 09:12:26 compute-0 systemd[1]: Started libcrun container.
Dec 11 09:12:26 compute-0 ceph-mon[74068]: mon.compute-0@0(leader).osd e1 crush map has features 3314932999778484224, adjusting msgr requires
Dec 11 09:12:26 compute-0 ceph-mon[74068]: mon.compute-0@0(leader).osd e1 crush map has features 288514050185494528, adjusting msgr requires
Dec 11 09:12:26 compute-0 ceph-mon[74068]: mon.compute-0@0(leader).osd e1 crush map has features 288514050185494528, adjusting msgr requires
Dec 11 09:12:26 compute-0 ceph-mon[74068]: mkfs 31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060
Dec 11 09:12:26 compute-0 ceph-mon[74068]: mon.compute-0@0(leader).osd e1 crush map has features 288514050185494528, adjusting msgr requires
Dec 11 09:12:26 compute-0 ceph-mon[74068]: mon.compute-0@0(leader).paxosservice(auth 1..1) refresh upgraded, format 0 -> 3
Dec 11 09:12:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0ddac0b6aa6c81da5f978072c83fe31d829f2017af349dae839236177dd1fffe/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Dec 11 09:12:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0ddac0b6aa6c81da5f978072c83fe31d829f2017af349dae839236177dd1fffe/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 11 09:12:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0ddac0b6aa6c81da5f978072c83fe31d829f2017af349dae839236177dd1fffe/merged/var/lib/ceph/mon/ceph-compute-0 supports timestamps until 2038 (0x7fffffff)
Dec 11 09:12:26 compute-0 ceph-mon[74068]: log_channel(cluster) log [DBG] : osdmap e1: 0 total, 0 up, 0 in
Dec 11 09:12:26 compute-0 ceph-mon[74068]: log_channel(cluster) log [DBG] : mgrmap e1: no daemons active
Dec 11 09:12:26 compute-0 ceph-mon[74068]: mon.compute-0 is new leader, mons compute-0 in quorum (ranks 0)
Dec 11 09:12:26 compute-0 podman[74069]: 2025-12-11 09:12:26.267171318 +0000 UTC m=+0.219948701 container init fbc6f48ffab2f2df6ac68cbd92d8d612dd836f6350467cd0866fa21853f21363 (image=quay.io/ceph/ceph:v19, name=gallant_lamport, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 11 09:12:26 compute-0 podman[74069]: 2025-12-11 09:12:26.275820497 +0000 UTC m=+0.228597860 container start fbc6f48ffab2f2df6ac68cbd92d8d612dd836f6350467cd0866fa21853f21363 (image=quay.io/ceph/ceph:v19, name=gallant_lamport, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, CEPH_REF=squid, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec 11 09:12:26 compute-0 podman[74069]: 2025-12-11 09:12:26.280032859 +0000 UTC m=+0.232810242 container attach fbc6f48ffab2f2df6ac68cbd92d8d612dd836f6350467cd0866fa21853f21363 (image=quay.io/ceph/ceph:v19, name=gallant_lamport, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec 11 09:12:26 compute-0 ceph-mon[74068]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status"} v 0)
Dec 11 09:12:26 compute-0 ceph-mon[74068]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2284342455' entity='client.admin' cmd=[{"prefix": "status"}]: dispatch
Dec 11 09:12:26 compute-0 gallant_lamport[74124]:   cluster:
Dec 11 09:12:26 compute-0 gallant_lamport[74124]:     id:     31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060
Dec 11 09:12:26 compute-0 gallant_lamport[74124]:     health: HEALTH_OK
Dec 11 09:12:26 compute-0 gallant_lamport[74124]:  
Dec 11 09:12:26 compute-0 gallant_lamport[74124]:   services:
Dec 11 09:12:26 compute-0 gallant_lamport[74124]:     mon: 1 daemons, quorum compute-0 (age 0.29308s)
Dec 11 09:12:26 compute-0 gallant_lamport[74124]:     mgr: no daemons active
Dec 11 09:12:26 compute-0 gallant_lamport[74124]:     osd: 0 osds: 0 up, 0 in
Dec 11 09:12:26 compute-0 gallant_lamport[74124]:  
Dec 11 09:12:26 compute-0 gallant_lamport[74124]:   data:
Dec 11 09:12:26 compute-0 gallant_lamport[74124]:     pools:   0 pools, 0 pgs
Dec 11 09:12:26 compute-0 gallant_lamport[74124]:     objects: 0 objects, 0 B
Dec 11 09:12:26 compute-0 gallant_lamport[74124]:     usage:   0 B used, 0 B / 0 B avail
Dec 11 09:12:26 compute-0 gallant_lamport[74124]:     pgs:     
Dec 11 09:12:26 compute-0 gallant_lamport[74124]:  
Dec 11 09:12:26 compute-0 systemd[1]: libpod-fbc6f48ffab2f2df6ac68cbd92d8d612dd836f6350467cd0866fa21853f21363.scope: Deactivated successfully.
Dec 11 09:12:26 compute-0 podman[74069]: 2025-12-11 09:12:26.500896497 +0000 UTC m=+0.453673870 container died fbc6f48ffab2f2df6ac68cbd92d8d612dd836f6350467cd0866fa21853f21363 (image=quay.io/ceph/ceph:v19, name=gallant_lamport, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Dec 11 09:12:26 compute-0 podman[74069]: 2025-12-11 09:12:26.540613964 +0000 UTC m=+0.493391327 container remove fbc6f48ffab2f2df6ac68cbd92d8d612dd836f6350467cd0866fa21853f21363 (image=quay.io/ceph/ceph:v19, name=gallant_lamport, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=squid)
Dec 11 09:12:26 compute-0 systemd[1]: libpod-conmon-fbc6f48ffab2f2df6ac68cbd92d8d612dd836f6350467cd0866fa21853f21363.scope: Deactivated successfully.
Dec 11 09:12:26 compute-0 podman[74160]: 2025-12-11 09:12:26.626267671 +0000 UTC m=+0.061763284 container create 0fb707fe7409c7ef52f62c17a32045a3c5a688395cd5bc0224c9cd18fe1afcd5 (image=quay.io/ceph/ceph:v19, name=infallible_allen, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=squid, io.buildah.version=1.40.1)
Dec 11 09:12:26 compute-0 systemd[1]: Started libpod-conmon-0fb707fe7409c7ef52f62c17a32045a3c5a688395cd5bc0224c9cd18fe1afcd5.scope.
Dec 11 09:12:26 compute-0 podman[74160]: 2025-12-11 09:12:26.590187298 +0000 UTC m=+0.025682941 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Dec 11 09:12:26 compute-0 systemd[1]: Started libcrun container.
Dec 11 09:12:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1ae002b51913e7c0eecebfd53a8e753b7f755309d545191b5b131fb724036902/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 11 09:12:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1ae002b51913e7c0eecebfd53a8e753b7f755309d545191b5b131fb724036902/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Dec 11 09:12:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1ae002b51913e7c0eecebfd53a8e753b7f755309d545191b5b131fb724036902/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 11 09:12:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1ae002b51913e7c0eecebfd53a8e753b7f755309d545191b5b131fb724036902/merged/var/lib/ceph/mon/ceph-compute-0 supports timestamps until 2038 (0x7fffffff)
Dec 11 09:12:26 compute-0 podman[74160]: 2025-12-11 09:12:26.741058956 +0000 UTC m=+0.176554599 container init 0fb707fe7409c7ef52f62c17a32045a3c5a688395cd5bc0224c9cd18fe1afcd5 (image=quay.io/ceph/ceph:v19, name=infallible_allen, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 11 09:12:26 compute-0 podman[74160]: 2025-12-11 09:12:26.748250581 +0000 UTC m=+0.183746194 container start 0fb707fe7409c7ef52f62c17a32045a3c5a688395cd5bc0224c9cd18fe1afcd5 (image=quay.io/ceph/ceph:v19, name=infallible_allen, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0)
Dec 11 09:12:26 compute-0 podman[74160]: 2025-12-11 09:12:26.816737213 +0000 UTC m=+0.252232846 container attach 0fb707fe7409c7ef52f62c17a32045a3c5a688395cd5bc0224c9cd18fe1afcd5 (image=quay.io/ceph/ceph:v19, name=infallible_allen, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, CEPH_REF=squid)
Dec 11 09:12:26 compute-0 ceph-mon[74068]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config assimilate-conf"} v 0)
Dec 11 09:12:26 compute-0 ceph-mon[74068]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1142513337' entity='client.admin' cmd=[{"prefix": "config assimilate-conf"}]: dispatch
Dec 11 09:12:26 compute-0 ceph-mon[74068]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1142513337' entity='client.admin' cmd='[{"prefix": "config assimilate-conf"}]': finished
Dec 11 09:12:26 compute-0 infallible_allen[74177]: 
Dec 11 09:12:26 compute-0 infallible_allen[74177]: [global]
Dec 11 09:12:26 compute-0 infallible_allen[74177]:         fsid = 31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060
Dec 11 09:12:26 compute-0 infallible_allen[74177]:         mon_host = [v2:192.168.122.100:3300,v1:192.168.122.100:6789]
Dec 11 09:12:26 compute-0 systemd[1]: libpod-0fb707fe7409c7ef52f62c17a32045a3c5a688395cd5bc0224c9cd18fe1afcd5.scope: Deactivated successfully.
Dec 11 09:12:27 compute-0 podman[74203]: 2025-12-11 09:12:27.030876012 +0000 UTC m=+0.026462405 container died 0fb707fe7409c7ef52f62c17a32045a3c5a688395cd5bc0224c9cd18fe1afcd5 (image=quay.io/ceph/ceph:v19, name=infallible_allen, CEPH_REF=squid, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Dec 11 09:12:27 compute-0 systemd[1]: var-lib-containers-storage-overlay-1ae002b51913e7c0eecebfd53a8e753b7f755309d545191b5b131fb724036902-merged.mount: Deactivated successfully.
Dec 11 09:12:27 compute-0 podman[74203]: 2025-12-11 09:12:27.062536268 +0000 UTC m=+0.058122641 container remove 0fb707fe7409c7ef52f62c17a32045a3c5a688395cd5bc0224c9cd18fe1afcd5 (image=quay.io/ceph/ceph:v19, name=infallible_allen, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 11 09:12:27 compute-0 systemd[1]: libpod-conmon-0fb707fe7409c7ef52f62c17a32045a3c5a688395cd5bc0224c9cd18fe1afcd5.scope: Deactivated successfully.
Dec 11 09:12:27 compute-0 podman[74218]: 2025-12-11 09:12:27.10786803 +0000 UTC m=+0.022073429 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Dec 11 09:12:27 compute-0 podman[74218]: 2025-12-11 09:12:27.398080997 +0000 UTC m=+0.312286386 container create dc37c284cb724f7dcc7ce7d14b2a08f6449e55d4cca15abab377258511263f6a (image=quay.io/ceph/ceph:v19, name=eloquent_maxwell, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Dec 11 09:12:27 compute-0 ceph-mon[74068]: mon.compute-0 is new leader, mons compute-0 in quorum (ranks 0)
Dec 11 09:12:27 compute-0 ceph-mon[74068]: monmap epoch 1
Dec 11 09:12:27 compute-0 ceph-mon[74068]: fsid 31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060
Dec 11 09:12:27 compute-0 ceph-mon[74068]: last_changed 2025-12-11T09:12:23.814502+0000
Dec 11 09:12:27 compute-0 ceph-mon[74068]: created 2025-12-11T09:12:23.814502+0000
Dec 11 09:12:27 compute-0 ceph-mon[74068]: min_mon_release 19 (squid)
Dec 11 09:12:27 compute-0 ceph-mon[74068]: election_strategy: 1
Dec 11 09:12:27 compute-0 ceph-mon[74068]: 0: [v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0] mon.compute-0
Dec 11 09:12:27 compute-0 ceph-mon[74068]: fsmap 
Dec 11 09:12:27 compute-0 ceph-mon[74068]: osdmap e1: 0 total, 0 up, 0 in
Dec 11 09:12:27 compute-0 ceph-mon[74068]: mgrmap e1: no daemons active
Dec 11 09:12:27 compute-0 ceph-mon[74068]: from='client.? 192.168.122.100:0/2284342455' entity='client.admin' cmd=[{"prefix": "status"}]: dispatch
Dec 11 09:12:27 compute-0 ceph-mon[74068]: from='client.? 192.168.122.100:0/1142513337' entity='client.admin' cmd=[{"prefix": "config assimilate-conf"}]: dispatch
Dec 11 09:12:27 compute-0 ceph-mon[74068]: from='client.? 192.168.122.100:0/1142513337' entity='client.admin' cmd='[{"prefix": "config assimilate-conf"}]': finished
Dec 11 09:12:27 compute-0 systemd[1]: Started libpod-conmon-dc37c284cb724f7dcc7ce7d14b2a08f6449e55d4cca15abab377258511263f6a.scope.
Dec 11 09:12:27 compute-0 systemd[1]: Started libcrun container.
Dec 11 09:12:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2417a736688909318a0de63faa52a9afcbbe2938f91fc5eec342d5c598a2a6c6/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 11 09:12:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2417a736688909318a0de63faa52a9afcbbe2938f91fc5eec342d5c598a2a6c6/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Dec 11 09:12:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2417a736688909318a0de63faa52a9afcbbe2938f91fc5eec342d5c598a2a6c6/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 11 09:12:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2417a736688909318a0de63faa52a9afcbbe2938f91fc5eec342d5c598a2a6c6/merged/var/lib/ceph/mon/ceph-compute-0 supports timestamps until 2038 (0x7fffffff)
Dec 11 09:12:27 compute-0 podman[74218]: 2025-12-11 09:12:27.477924604 +0000 UTC m=+0.392130003 container init dc37c284cb724f7dcc7ce7d14b2a08f6449e55d4cca15abab377258511263f6a (image=quay.io/ceph/ceph:v19, name=eloquent_maxwell, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 11 09:12:27 compute-0 podman[74218]: 2025-12-11 09:12:27.484424407 +0000 UTC m=+0.398629786 container start dc37c284cb724f7dcc7ce7d14b2a08f6449e55d4cca15abab377258511263f6a (image=quay.io/ceph/ceph:v19, name=eloquent_maxwell, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325)
Dec 11 09:12:27 compute-0 podman[74218]: 2025-12-11 09:12:27.488452502 +0000 UTC m=+0.402657991 container attach dc37c284cb724f7dcc7ce7d14b2a08f6449e55d4cca15abab377258511263f6a (image=quay.io/ceph/ceph:v19, name=eloquent_maxwell, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, CEPH_REF=squid, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Dec 11 09:12:27 compute-0 ceph-mon[74068]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 11 09:12:27 compute-0 ceph-mon[74068]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/770486838' entity='client.admin' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 11 09:12:27 compute-0 systemd[1]: libpod-dc37c284cb724f7dcc7ce7d14b2a08f6449e55d4cca15abab377258511263f6a.scope: Deactivated successfully.
Dec 11 09:12:27 compute-0 podman[74218]: 2025-12-11 09:12:27.727565149 +0000 UTC m=+0.641770528 container died dc37c284cb724f7dcc7ce7d14b2a08f6449e55d4cca15abab377258511263f6a (image=quay.io/ceph/ceph:v19, name=eloquent_maxwell, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, OSD_FLAVOR=default)
Dec 11 09:12:27 compute-0 systemd[1]: var-lib-containers-storage-overlay-2417a736688909318a0de63faa52a9afcbbe2938f91fc5eec342d5c598a2a6c6-merged.mount: Deactivated successfully.
Dec 11 09:12:27 compute-0 podman[74218]: 2025-12-11 09:12:27.792401979 +0000 UTC m=+0.706607358 container remove dc37c284cb724f7dcc7ce7d14b2a08f6449e55d4cca15abab377258511263f6a (image=quay.io/ceph/ceph:v19, name=eloquent_maxwell, CEPH_REF=squid, ceph=True, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Dec 11 09:12:27 compute-0 systemd[1]: libpod-conmon-dc37c284cb724f7dcc7ce7d14b2a08f6449e55d4cca15abab377258511263f6a.scope: Deactivated successfully.
Dec 11 09:12:27 compute-0 systemd[1]: Stopping Ceph mon.compute-0 for 31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060...
Dec 11 09:12:27 compute-0 ceph-mon[74068]: received  signal: Terminated from /run/podman-init -- /usr/bin/ceph-mon -n mon.compute-0 -f --setuser ceph --setgroup ceph --default-log-to-file=false --default-log-to-journald=true --default-log-to-stderr=false --default-mon-cluster-log-to-file=false --default-mon-cluster-log-to-journald=true --default-mon-cluster-log-to-stderr=false  (PID: 1) UID: 0
Dec 11 09:12:27 compute-0 ceph-mon[74068]: mon.compute-0@0(leader) e1 *** Got Signal Terminated ***
Dec 11 09:12:27 compute-0 ceph-mon[74068]: mon.compute-0@0(leader) e1 shutdown
Dec 11 09:12:27 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-mon-compute-0[74064]: 2025-12-11T09:12:27.984+0000 7f287535f640 -1 received  signal: Terminated from /run/podman-init -- /usr/bin/ceph-mon -n mon.compute-0 -f --setuser ceph --setgroup ceph --default-log-to-file=false --default-log-to-journald=true --default-log-to-stderr=false --default-mon-cluster-log-to-file=false --default-mon-cluster-log-to-journald=true --default-mon-cluster-log-to-stderr=false  (PID: 1) UID: 0
Dec 11 09:12:27 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-mon-compute-0[74064]: 2025-12-11T09:12:27.984+0000 7f287535f640 -1 mon.compute-0@0(leader) e1 *** Got Signal Terminated ***
Dec 11 09:12:27 compute-0 ceph-mon[74068]: rocksdb: [db/db_impl/db_impl.cc:496] Shutdown: canceling all background work
Dec 11 09:12:27 compute-0 ceph-mon[74068]: rocksdb: [db/db_impl/db_impl.cc:704] Shutdown complete
Dec 11 09:12:28 compute-0 podman[74303]: 2025-12-11 09:12:28.128503876 +0000 UTC m=+0.183704263 container died ad109b62b3cb477ead9c23ad98f79dee561c267db5880c9842f5eb391537bf24 (image=quay.io/ceph/ceph:v19, name=ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-mon-compute-0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Dec 11 09:12:28 compute-0 systemd[1]: var-lib-containers-storage-overlay-18befd82102ab7b45cfdb8261f9509bb898f8014e3d3eebff9612c988186e491-merged.mount: Deactivated successfully.
Dec 11 09:12:28 compute-0 podman[74303]: 2025-12-11 09:12:28.181867798 +0000 UTC m=+0.237068185 container remove ad109b62b3cb477ead9c23ad98f79dee561c267db5880c9842f5eb391537bf24 (image=quay.io/ceph/ceph:v19, name=ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-mon-compute-0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 11 09:12:28 compute-0 bash[74303]: ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-mon-compute-0
Dec 11 09:12:28 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Dec 11 09:12:28 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Dec 11 09:12:28 compute-0 systemd[1]: ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060@mon.compute-0.service: Deactivated successfully.
Dec 11 09:12:28 compute-0 systemd[1]: Stopped Ceph mon.compute-0 for 31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060.
Dec 11 09:12:28 compute-0 systemd[1]: Starting Ceph mon.compute-0 for 31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060...
Dec 11 09:12:28 compute-0 podman[74407]: 2025-12-11 09:12:28.483733818 +0000 UTC m=+0.023442361 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Dec 11 09:12:28 compute-0 podman[74407]: 2025-12-11 09:12:28.58784043 +0000 UTC m=+0.127548943 container create 9c08b5b1828392d2cc014af7ec9c415c9c915d4b6dd9798da9403e6929851bd4 (image=quay.io/ceph/ceph:v19, name=ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-mon-compute-0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 11 09:12:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/799c44f141f3ceff6f067a9e7552b909e6ced56c4edef29fad2ed84875017c29/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 11 09:12:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/799c44f141f3ceff6f067a9e7552b909e6ced56c4edef29fad2ed84875017c29/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 11 09:12:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/799c44f141f3ceff6f067a9e7552b909e6ced56c4edef29fad2ed84875017c29/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 11 09:12:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/799c44f141f3ceff6f067a9e7552b909e6ced56c4edef29fad2ed84875017c29/merged/var/lib/ceph/mon/ceph-compute-0 supports timestamps until 2038 (0x7fffffff)
Dec 11 09:12:28 compute-0 podman[74407]: 2025-12-11 09:12:28.64403778 +0000 UTC m=+0.183746323 container init 9c08b5b1828392d2cc014af7ec9c415c9c915d4b6dd9798da9403e6929851bd4 (image=quay.io/ceph/ceph:v19, name=ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-mon-compute-0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, CEPH_REF=squid)
Dec 11 09:12:28 compute-0 podman[74407]: 2025-12-11 09:12:28.651618467 +0000 UTC m=+0.191326990 container start 9c08b5b1828392d2cc014af7ec9c415c9c915d4b6dd9798da9403e6929851bd4 (image=quay.io/ceph/ceph:v19, name=ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-mon-compute-0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 11 09:12:28 compute-0 bash[74407]: 9c08b5b1828392d2cc014af7ec9c415c9c915d4b6dd9798da9403e6929851bd4
Dec 11 09:12:28 compute-0 systemd[1]: Started Ceph mon.compute-0 for 31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060.
Dec 11 09:12:28 compute-0 ceph-mon[74426]: set uid:gid to 167:167 (ceph:ceph)
Dec 11 09:12:28 compute-0 ceph-mon[74426]: ceph version 19.2.3 (c92aebb279828e9c3c1f5d24613efca272649e62) squid (stable), process ceph-mon, pid 2
Dec 11 09:12:28 compute-0 ceph-mon[74426]: pidfile_write: ignore empty --pid-file
Dec 11 09:12:28 compute-0 ceph-mon[74426]: load: jerasure load: lrc 
Dec 11 09:12:28 compute-0 ceph-mon[74426]: rocksdb: RocksDB version: 7.9.2
Dec 11 09:12:28 compute-0 ceph-mon[74426]: rocksdb: Git sha 0
Dec 11 09:12:28 compute-0 ceph-mon[74426]: rocksdb: Compile date 2025-07-17 03:12:14
Dec 11 09:12:28 compute-0 ceph-mon[74426]: rocksdb: DB SUMMARY
Dec 11 09:12:28 compute-0 ceph-mon[74426]: rocksdb: DB Session ID:  U2WUY0PZPH8WS8N5I572
Dec 11 09:12:28 compute-0 ceph-mon[74426]: rocksdb: CURRENT file:  CURRENT
Dec 11 09:12:28 compute-0 ceph-mon[74426]: rocksdb: IDENTITY file:  IDENTITY
Dec 11 09:12:28 compute-0 ceph-mon[74426]: rocksdb: MANIFEST file:  MANIFEST-000010 size: 179 Bytes
Dec 11 09:12:28 compute-0 ceph-mon[74426]: rocksdb: SST files in /var/lib/ceph/mon/ceph-compute-0/store.db dir, Total Num: 1, files: 000008.sst 
Dec 11 09:12:28 compute-0 ceph-mon[74426]: rocksdb: Write Ahead Log file in /var/lib/ceph/mon/ceph-compute-0/store.db: 000009.log size: 58741 ; 
Dec 11 09:12:28 compute-0 ceph-mon[74426]: rocksdb:                         Options.error_if_exists: 0
Dec 11 09:12:28 compute-0 ceph-mon[74426]: rocksdb:                       Options.create_if_missing: 0
Dec 11 09:12:28 compute-0 ceph-mon[74426]: rocksdb:                         Options.paranoid_checks: 1
Dec 11 09:12:28 compute-0 ceph-mon[74426]: rocksdb:             Options.flush_verify_memtable_count: 1
Dec 11 09:12:28 compute-0 ceph-mon[74426]: rocksdb:                               Options.track_and_verify_wals_in_manifest: 0
Dec 11 09:12:28 compute-0 ceph-mon[74426]: rocksdb:        Options.verify_sst_unique_id_in_manifest: 1
Dec 11 09:12:28 compute-0 ceph-mon[74426]: rocksdb:                                     Options.env: 0x5578bf861c20
Dec 11 09:12:28 compute-0 ceph-mon[74426]: rocksdb:                                      Options.fs: PosixFileSystem
Dec 11 09:12:28 compute-0 ceph-mon[74426]: rocksdb:                                Options.info_log: 0x5578c1082e20
Dec 11 09:12:28 compute-0 ceph-mon[74426]: rocksdb:                Options.max_file_opening_threads: 16
Dec 11 09:12:28 compute-0 ceph-mon[74426]: rocksdb:                              Options.statistics: (nil)
Dec 11 09:12:28 compute-0 ceph-mon[74426]: rocksdb:                               Options.use_fsync: 0
Dec 11 09:12:28 compute-0 ceph-mon[74426]: rocksdb:                       Options.max_log_file_size: 0
Dec 11 09:12:28 compute-0 ceph-mon[74426]: rocksdb:                  Options.max_manifest_file_size: 1073741824
Dec 11 09:12:28 compute-0 ceph-mon[74426]: rocksdb:                   Options.log_file_time_to_roll: 0
Dec 11 09:12:28 compute-0 ceph-mon[74426]: rocksdb:                       Options.keep_log_file_num: 1000
Dec 11 09:12:28 compute-0 ceph-mon[74426]: rocksdb:                    Options.recycle_log_file_num: 0
Dec 11 09:12:28 compute-0 ceph-mon[74426]: rocksdb:                         Options.allow_fallocate: 1
Dec 11 09:12:28 compute-0 ceph-mon[74426]: rocksdb:                        Options.allow_mmap_reads: 0
Dec 11 09:12:28 compute-0 ceph-mon[74426]: rocksdb:                       Options.allow_mmap_writes: 0
Dec 11 09:12:28 compute-0 ceph-mon[74426]: rocksdb:                        Options.use_direct_reads: 0
Dec 11 09:12:28 compute-0 ceph-mon[74426]: rocksdb:                        Options.use_direct_io_for_flush_and_compaction: 0
Dec 11 09:12:28 compute-0 ceph-mon[74426]: rocksdb:          Options.create_missing_column_families: 0
Dec 11 09:12:28 compute-0 ceph-mon[74426]: rocksdb:                              Options.db_log_dir: 
Dec 11 09:12:28 compute-0 ceph-mon[74426]: rocksdb:                                 Options.wal_dir: 
Dec 11 09:12:28 compute-0 ceph-mon[74426]: rocksdb:                Options.table_cache_numshardbits: 6
Dec 11 09:12:28 compute-0 ceph-mon[74426]: rocksdb:                         Options.WAL_ttl_seconds: 0
Dec 11 09:12:28 compute-0 ceph-mon[74426]: rocksdb:                       Options.WAL_size_limit_MB: 0
Dec 11 09:12:28 compute-0 ceph-mon[74426]: rocksdb:                        Options.max_write_batch_group_size_bytes: 1048576
Dec 11 09:12:28 compute-0 ceph-mon[74426]: rocksdb:             Options.manifest_preallocation_size: 4194304
Dec 11 09:12:28 compute-0 ceph-mon[74426]: rocksdb:                     Options.is_fd_close_on_exec: 1
Dec 11 09:12:28 compute-0 ceph-mon[74426]: rocksdb:                   Options.advise_random_on_open: 1
Dec 11 09:12:28 compute-0 ceph-mon[74426]: rocksdb:                    Options.db_write_buffer_size: 0
Dec 11 09:12:28 compute-0 ceph-mon[74426]: rocksdb:                    Options.write_buffer_manager: 0x5578c1087900
Dec 11 09:12:28 compute-0 ceph-mon[74426]: rocksdb:         Options.access_hint_on_compaction_start: 1
Dec 11 09:12:28 compute-0 ceph-mon[74426]: rocksdb:           Options.random_access_max_buffer_size: 1048576
Dec 11 09:12:28 compute-0 ceph-mon[74426]: rocksdb:                      Options.use_adaptive_mutex: 0
Dec 11 09:12:28 compute-0 ceph-mon[74426]: rocksdb:                            Options.rate_limiter: (nil)
Dec 11 09:12:28 compute-0 ceph-mon[74426]: rocksdb:     Options.sst_file_manager.rate_bytes_per_sec: 0
Dec 11 09:12:28 compute-0 ceph-mon[74426]: rocksdb:                       Options.wal_recovery_mode: 2
Dec 11 09:12:28 compute-0 ceph-mon[74426]: rocksdb:                  Options.enable_thread_tracking: 0
Dec 11 09:12:28 compute-0 ceph-mon[74426]: rocksdb:                  Options.enable_pipelined_write: 0
Dec 11 09:12:28 compute-0 ceph-mon[74426]: rocksdb:                  Options.unordered_write: 0
Dec 11 09:12:28 compute-0 ceph-mon[74426]: rocksdb:         Options.allow_concurrent_memtable_write: 1
Dec 11 09:12:28 compute-0 ceph-mon[74426]: rocksdb:      Options.enable_write_thread_adaptive_yield: 1
Dec 11 09:12:28 compute-0 ceph-mon[74426]: rocksdb:             Options.write_thread_max_yield_usec: 100
Dec 11 09:12:28 compute-0 ceph-mon[74426]: rocksdb:            Options.write_thread_slow_yield_usec: 3
Dec 11 09:12:28 compute-0 ceph-mon[74426]: rocksdb:                               Options.row_cache: None
Dec 11 09:12:28 compute-0 ceph-mon[74426]: rocksdb:                              Options.wal_filter: None
Dec 11 09:12:28 compute-0 ceph-mon[74426]: rocksdb:             Options.avoid_flush_during_recovery: 0
Dec 11 09:12:28 compute-0 ceph-mon[74426]: rocksdb:             Options.allow_ingest_behind: 0
Dec 11 09:12:28 compute-0 ceph-mon[74426]: rocksdb:             Options.two_write_queues: 0
Dec 11 09:12:28 compute-0 ceph-mon[74426]: rocksdb:             Options.manual_wal_flush: 0
Dec 11 09:12:28 compute-0 ceph-mon[74426]: rocksdb:             Options.wal_compression: 0
Dec 11 09:12:28 compute-0 ceph-mon[74426]: rocksdb:             Options.atomic_flush: 0
Dec 11 09:12:28 compute-0 ceph-mon[74426]: rocksdb:             Options.avoid_unnecessary_blocking_io: 0
Dec 11 09:12:28 compute-0 ceph-mon[74426]: rocksdb:                 Options.persist_stats_to_disk: 0
Dec 11 09:12:28 compute-0 ceph-mon[74426]: rocksdb:                 Options.write_dbid_to_manifest: 0
Dec 11 09:12:28 compute-0 ceph-mon[74426]: rocksdb:                 Options.log_readahead_size: 0
Dec 11 09:12:28 compute-0 ceph-mon[74426]: rocksdb:                 Options.file_checksum_gen_factory: Unknown
Dec 11 09:12:28 compute-0 ceph-mon[74426]: rocksdb:                 Options.best_efforts_recovery: 0
Dec 11 09:12:28 compute-0 ceph-mon[74426]: rocksdb:                Options.max_bgerror_resume_count: 2147483647
Dec 11 09:12:28 compute-0 ceph-mon[74426]: rocksdb:            Options.bgerror_resume_retry_interval: 1000000
Dec 11 09:12:28 compute-0 ceph-mon[74426]: rocksdb:             Options.allow_data_in_errors: 0
Dec 11 09:12:28 compute-0 ceph-mon[74426]: rocksdb:             Options.db_host_id: __hostname__
Dec 11 09:12:28 compute-0 ceph-mon[74426]: rocksdb:             Options.enforce_single_del_contracts: true
Dec 11 09:12:28 compute-0 ceph-mon[74426]: rocksdb:             Options.max_background_jobs: 2
Dec 11 09:12:28 compute-0 ceph-mon[74426]: rocksdb:             Options.max_background_compactions: -1
Dec 11 09:12:28 compute-0 ceph-mon[74426]: rocksdb:             Options.max_subcompactions: 1
Dec 11 09:12:28 compute-0 ceph-mon[74426]: rocksdb:             Options.avoid_flush_during_shutdown: 0
Dec 11 09:12:28 compute-0 ceph-mon[74426]: rocksdb:           Options.writable_file_max_buffer_size: 1048576
Dec 11 09:12:28 compute-0 ceph-mon[74426]: rocksdb:             Options.delayed_write_rate : 16777216
Dec 11 09:12:28 compute-0 ceph-mon[74426]: rocksdb:             Options.max_total_wal_size: 0
Dec 11 09:12:28 compute-0 ceph-mon[74426]: rocksdb:             Options.delete_obsolete_files_period_micros: 21600000000
Dec 11 09:12:28 compute-0 ceph-mon[74426]: rocksdb:                   Options.stats_dump_period_sec: 600
Dec 11 09:12:28 compute-0 ceph-mon[74426]: rocksdb:                 Options.stats_persist_period_sec: 600
Dec 11 09:12:28 compute-0 ceph-mon[74426]: rocksdb:                 Options.stats_history_buffer_size: 1048576
Dec 11 09:12:28 compute-0 ceph-mon[74426]: rocksdb:                          Options.max_open_files: -1
Dec 11 09:12:28 compute-0 ceph-mon[74426]: rocksdb:                          Options.bytes_per_sync: 0
Dec 11 09:12:28 compute-0 ceph-mon[74426]: rocksdb:                      Options.wal_bytes_per_sync: 0
Dec 11 09:12:28 compute-0 ceph-mon[74426]: rocksdb:                   Options.strict_bytes_per_sync: 0
Dec 11 09:12:28 compute-0 ceph-mon[74426]: rocksdb:       Options.compaction_readahead_size: 0
Dec 11 09:12:28 compute-0 ceph-mon[74426]: rocksdb:                  Options.max_background_flushes: -1
Dec 11 09:12:28 compute-0 ceph-mon[74426]: rocksdb: Compression algorithms supported:
Dec 11 09:12:28 compute-0 ceph-mon[74426]: rocksdb:         kZSTD supported: 0
Dec 11 09:12:28 compute-0 ceph-mon[74426]: rocksdb:         kXpressCompression supported: 0
Dec 11 09:12:28 compute-0 ceph-mon[74426]: rocksdb:         kBZip2Compression supported: 0
Dec 11 09:12:28 compute-0 ceph-mon[74426]: rocksdb:         kZSTDNotFinalCompression supported: 0
Dec 11 09:12:28 compute-0 ceph-mon[74426]: rocksdb:         kLZ4Compression supported: 1
Dec 11 09:12:28 compute-0 ceph-mon[74426]: rocksdb:         kZlibCompression supported: 1
Dec 11 09:12:28 compute-0 ceph-mon[74426]: rocksdb:         kLZ4HCCompression supported: 1
Dec 11 09:12:28 compute-0 ceph-mon[74426]: rocksdb:         kSnappyCompression supported: 1
Dec 11 09:12:28 compute-0 ceph-mon[74426]: rocksdb: Fast CRC32 supported: Supported on x86
Dec 11 09:12:28 compute-0 ceph-mon[74426]: rocksdb: DMutex implementation: pthread_mutex_t
Dec 11 09:12:28 compute-0 ceph-mon[74426]: rocksdb: [db/version_set.cc:5527] Recovering from manifest file: /var/lib/ceph/mon/ceph-compute-0/store.db/MANIFEST-000010
Dec 11 09:12:28 compute-0 ceph-mon[74426]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [default]:
Dec 11 09:12:28 compute-0 ceph-mon[74426]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec 11 09:12:28 compute-0 ceph-mon[74426]: rocksdb:           Options.merge_operator: 
Dec 11 09:12:28 compute-0 ceph-mon[74426]: rocksdb:        Options.compaction_filter: None
Dec 11 09:12:28 compute-0 ceph-mon[74426]: rocksdb:        Options.compaction_filter_factory: None
Dec 11 09:12:28 compute-0 ceph-mon[74426]: rocksdb:  Options.sst_partitioner_factory: None
Dec 11 09:12:28 compute-0 ceph-mon[74426]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec 11 09:12:28 compute-0 ceph-mon[74426]: rocksdb:            Options.table_factory: BlockBasedTable
Dec 11 09:12:28 compute-0 ceph-mon[74426]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5578c1082aa0)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x5578c10a7350
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 536870912
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Dec 11 09:12:28 compute-0 ceph-mon[74426]: rocksdb:        Options.write_buffer_size: 33554432
Dec 11 09:12:28 compute-0 ceph-mon[74426]: rocksdb:  Options.max_write_buffer_number: 2
Dec 11 09:12:28 compute-0 ceph-mon[74426]: rocksdb:          Options.compression: NoCompression
Dec 11 09:12:28 compute-0 ceph-mon[74426]: rocksdb:                  Options.bottommost_compression: Disabled
Dec 11 09:12:28 compute-0 ceph-mon[74426]: rocksdb:       Options.prefix_extractor: nullptr
Dec 11 09:12:28 compute-0 ceph-mon[74426]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec 11 09:12:28 compute-0 ceph-mon[74426]: rocksdb:             Options.num_levels: 7
Dec 11 09:12:28 compute-0 ceph-mon[74426]: rocksdb:        Options.min_write_buffer_number_to_merge: 1
Dec 11 09:12:28 compute-0 ceph-mon[74426]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec 11 09:12:28 compute-0 ceph-mon[74426]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec 11 09:12:28 compute-0 ceph-mon[74426]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec 11 09:12:28 compute-0 ceph-mon[74426]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec 11 09:12:28 compute-0 ceph-mon[74426]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec 11 09:12:28 compute-0 ceph-mon[74426]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec 11 09:12:28 compute-0 ceph-mon[74426]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec 11 09:12:28 compute-0 ceph-mon[74426]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec 11 09:12:28 compute-0 ceph-mon[74426]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec 11 09:12:28 compute-0 ceph-mon[74426]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec 11 09:12:28 compute-0 ceph-mon[74426]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec 11 09:12:28 compute-0 ceph-mon[74426]: rocksdb:            Options.compression_opts.window_bits: -14
Dec 11 09:12:28 compute-0 ceph-mon[74426]: rocksdb:                  Options.compression_opts.level: 32767
Dec 11 09:12:28 compute-0 ceph-mon[74426]: rocksdb:               Options.compression_opts.strategy: 0
Dec 11 09:12:28 compute-0 ceph-mon[74426]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec 11 09:12:28 compute-0 ceph-mon[74426]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec 11 09:12:28 compute-0 ceph-mon[74426]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec 11 09:12:28 compute-0 ceph-mon[74426]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec 11 09:12:28 compute-0 ceph-mon[74426]: rocksdb:                  Options.compression_opts.enabled: false
Dec 11 09:12:28 compute-0 ceph-mon[74426]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec 11 09:12:28 compute-0 ceph-mon[74426]: rocksdb:      Options.level0_file_num_compaction_trigger: 4
Dec 11 09:12:28 compute-0 ceph-mon[74426]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec 11 09:12:28 compute-0 ceph-mon[74426]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec 11 09:12:28 compute-0 ceph-mon[74426]: rocksdb:                   Options.target_file_size_base: 67108864
Dec 11 09:12:28 compute-0 ceph-mon[74426]: rocksdb:             Options.target_file_size_multiplier: 1
Dec 11 09:12:28 compute-0 ceph-mon[74426]: rocksdb:                Options.max_bytes_for_level_base: 268435456
Dec 11 09:12:28 compute-0 ceph-mon[74426]: rocksdb: Options.level_compaction_dynamic_level_bytes: 1
Dec 11 09:12:28 compute-0 ceph-mon[74426]: rocksdb:          Options.max_bytes_for_level_multiplier: 10.000000
Dec 11 09:12:28 compute-0 ceph-mon[74426]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec 11 09:12:28 compute-0 ceph-mon[74426]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec 11 09:12:28 compute-0 ceph-mon[74426]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec 11 09:12:28 compute-0 ceph-mon[74426]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec 11 09:12:28 compute-0 ceph-mon[74426]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec 11 09:12:28 compute-0 ceph-mon[74426]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec 11 09:12:28 compute-0 ceph-mon[74426]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec 11 09:12:28 compute-0 ceph-mon[74426]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec 11 09:12:28 compute-0 ceph-mon[74426]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec 11 09:12:28 compute-0 ceph-mon[74426]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec 11 09:12:28 compute-0 ceph-mon[74426]: rocksdb:                        Options.arena_block_size: 1048576
Dec 11 09:12:28 compute-0 ceph-mon[74426]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec 11 09:12:28 compute-0 ceph-mon[74426]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec 11 09:12:28 compute-0 ceph-mon[74426]: rocksdb:                Options.disable_auto_compactions: 0
Dec 11 09:12:28 compute-0 ceph-mon[74426]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec 11 09:12:28 compute-0 ceph-mon[74426]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec 11 09:12:28 compute-0 ceph-mon[74426]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec 11 09:12:28 compute-0 ceph-mon[74426]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec 11 09:12:28 compute-0 ceph-mon[74426]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec 11 09:12:28 compute-0 ceph-mon[74426]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec 11 09:12:28 compute-0 ceph-mon[74426]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec 11 09:12:28 compute-0 ceph-mon[74426]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec 11 09:12:28 compute-0 ceph-mon[74426]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec 11 09:12:28 compute-0 ceph-mon[74426]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec 11 09:12:28 compute-0 ceph-mon[74426]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec 11 09:12:28 compute-0 ceph-mon[74426]: rocksdb:                   Options.inplace_update_support: 0
Dec 11 09:12:28 compute-0 ceph-mon[74426]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec 11 09:12:28 compute-0 ceph-mon[74426]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec 11 09:12:28 compute-0 ceph-mon[74426]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec 11 09:12:28 compute-0 ceph-mon[74426]: rocksdb:   Options.memtable_huge_page_size: 0
Dec 11 09:12:28 compute-0 ceph-mon[74426]: rocksdb:                           Options.bloom_locality: 0
Dec 11 09:12:28 compute-0 ceph-mon[74426]: rocksdb:                    Options.max_successive_merges: 0
Dec 11 09:12:28 compute-0 ceph-mon[74426]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec 11 09:12:28 compute-0 ceph-mon[74426]: rocksdb:                Options.paranoid_file_checks: 0
Dec 11 09:12:28 compute-0 ceph-mon[74426]: rocksdb:                Options.force_consistency_checks: 1
Dec 11 09:12:28 compute-0 ceph-mon[74426]: rocksdb:                Options.report_bg_io_stats: 0
Dec 11 09:12:28 compute-0 ceph-mon[74426]: rocksdb:                               Options.ttl: 2592000
Dec 11 09:12:28 compute-0 ceph-mon[74426]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec 11 09:12:28 compute-0 ceph-mon[74426]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec 11 09:12:28 compute-0 ceph-mon[74426]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec 11 09:12:28 compute-0 ceph-mon[74426]: rocksdb:                       Options.enable_blob_files: false
Dec 11 09:12:28 compute-0 ceph-mon[74426]: rocksdb:                           Options.min_blob_size: 0
Dec 11 09:12:28 compute-0 ceph-mon[74426]: rocksdb:                          Options.blob_file_size: 268435456
Dec 11 09:12:28 compute-0 ceph-mon[74426]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec 11 09:12:28 compute-0 ceph-mon[74426]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec 11 09:12:28 compute-0 ceph-mon[74426]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec 11 09:12:28 compute-0 ceph-mon[74426]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec 11 09:12:28 compute-0 ceph-mon[74426]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec 11 09:12:28 compute-0 ceph-mon[74426]: rocksdb:                Options.blob_file_starting_level: 0
Dec 11 09:12:28 compute-0 ceph-mon[74426]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec 11 09:12:28 compute-0 ceph-mon[74426]: rocksdb: [db/version_set.cc:5566] Recovered from manifest file:/var/lib/ceph/mon/ceph-compute-0/store.db/MANIFEST-000010 succeeded,manifest_file_number is 10, next_file_number is 12, last_sequence is 5, log_number is 5,prev_log_number is 0,max_column_family is 0,min_log_number_to_keep is 5
Dec 11 09:12:28 compute-0 ceph-mon[74426]: rocksdb: [db/version_set.cc:5581] Column family [default] (ID 0), log number is 5
Dec 11 09:12:28 compute-0 ceph-mon[74426]: rocksdb: [db/db_impl/db_impl_open.cc:539] DB ID: e1ac8294-d02a-459a-8058-7f05c4f78e7d
Dec 11 09:12:28 compute-0 ceph-mon[74426]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765444348693983, "job": 1, "event": "recovery_started", "wal_files": [9]}
Dec 11 09:12:28 compute-0 ceph-mon[74426]: rocksdb: [db/db_impl/db_impl_open.cc:1043] Recovering log #9 mode 2
Dec 11 09:12:28 compute-0 ceph-mon[74426]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765444348801772, "cf_name": "default", "job": 1, "event": "table_file_creation", "file_number": 13, "file_size": 58492, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 8, "largest_seqno": 137, "table_properties": {"data_size": 56966, "index_size": 168, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 325, "raw_key_size": 3182, "raw_average_key_size": 30, "raw_value_size": 54483, "raw_average_value_size": 523, "num_data_blocks": 9, "num_entries": 104, "num_filter_entries": 104, "num_deletions": 3, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765444348, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "e1ac8294-d02a-459a-8058-7f05c4f78e7d", "db_session_id": "U2WUY0PZPH8WS8N5I572", "orig_file_number": 13, "seqno_to_time_mapping": "N/A"}}
Dec 11 09:12:28 compute-0 ceph-mon[74426]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765444348801943, "job": 1, "event": "recovery_finished"}
Dec 11 09:12:28 compute-0 ceph-mon[74426]: rocksdb: [db/version_set.cc:5047] Creating manifest 15
Dec 11 09:12:28 compute-0 podman[74427]: 2025-12-11 09:12:28.807494531 +0000 UTC m=+0.114126985 container create 210280acc12c28943232be3a864787990d3aa8eb4488ced95e41f6412a021cb9 (image=quay.io/ceph/ceph:v19, name=beautiful_morse, io.buildah.version=1.40.1, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Dec 11 09:12:28 compute-0 podman[74427]: 2025-12-11 09:12:28.71950301 +0000 UTC m=+0.026135484 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Dec 11 09:12:28 compute-0 systemd[1]: Started libpod-conmon-210280acc12c28943232be3a864787990d3aa8eb4488ced95e41f6412a021cb9.scope.
Dec 11 09:12:28 compute-0 ceph-mon[74426]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000009.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 11 09:12:28 compute-0 ceph-mon[74426]: rocksdb: [db/db_impl/db_impl_open.cc:1987] SstFileManager instance 0x5578c10a8e00
Dec 11 09:12:28 compute-0 ceph-mon[74426]: rocksdb: DB pointer 0x5578c11b2000
Dec 11 09:12:28 compute-0 ceph-mon[74426]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Dec 11 09:12:28 compute-0 ceph-mon[74426]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 0.2 total, 0.2 interval
                                           Cumulative writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 GB, 0.00 MB/s
                                           Cumulative WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 MB, 0.00 MB/s
                                           Interval WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      2/0   59.02 KB   0.5      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.5      0.11              0.00         1    0.107       0      0       0.0       0.0
                                            Sum      2/0   59.02 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.5      0.11              0.00         1    0.107       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.5      0.11              0.00         1    0.107       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.5      0.11              0.00         1    0.107       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.2 total, 0.2 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.32 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.1 seconds
                                           Interval compaction: 0.00 GB write, 0.32 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.1 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5578c10a7350#2 capacity: 512.00 MB usage: 0.84 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 0 last_secs: 1.7e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(2,0.48 KB,9.23872e-05%) IndexBlock(2,0.36 KB,6.85453e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
Dec 11 09:12:28 compute-0 ceph-mon[74426]: starting mon.compute-0 rank 0 at public addrs [v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0] at bind addrs [v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0] mon_data /var/lib/ceph/mon/ceph-compute-0 fsid 31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060
Dec 11 09:12:28 compute-0 ceph-mon[74426]: mon.compute-0@-1(???) e1 preinit fsid 31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060
Dec 11 09:12:28 compute-0 ceph-mon[74426]: mon.compute-0@-1(???).mds e1 new map
Dec 11 09:12:28 compute-0 ceph-mon[74426]: mon.compute-0@-1(???).mds e1 print_map
                                           e1
                                           btime 2025-12-11T09:12:26:191373+0000
                                           enable_multiple, ever_enabled_multiple: 1,1
                                           default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}
                                           legacy client fscid: -1
                                            
                                           No filesystems configured
Dec 11 09:12:28 compute-0 ceph-mon[74426]: mon.compute-0@-1(???).osd e1 crush map has features 3314932999778484224, adjusting msgr requires
Dec 11 09:12:28 compute-0 ceph-mon[74426]: mon.compute-0@-1(???).osd e1 crush map has features 288514050185494528, adjusting msgr requires
Dec 11 09:12:28 compute-0 ceph-mon[74426]: mon.compute-0@-1(???).osd e1 crush map has features 288514050185494528, adjusting msgr requires
Dec 11 09:12:28 compute-0 ceph-mon[74426]: mon.compute-0@-1(???).osd e1 crush map has features 288514050185494528, adjusting msgr requires
Dec 11 09:12:28 compute-0 ceph-mon[74426]: mon.compute-0@-1(???).paxosservice(auth 1..2) refresh upgraded, format 0 -> 3
Dec 11 09:12:28 compute-0 ceph-mon[74426]: mon.compute-0@-1(probing) e1  my rank is now 0 (was -1)
Dec 11 09:12:28 compute-0 ceph-mon[74426]: mon.compute-0@0(probing) e1 win_standalone_election
Dec 11 09:12:28 compute-0 ceph-mon[74426]: paxos.0).electionLogic(3) init, last seen epoch 3, mid-election, bumping
Dec 11 09:12:28 compute-0 systemd[1]: Started libcrun container.
Dec 11 09:12:28 compute-0 ceph-mon[74426]: mon.compute-0@0(electing) e1 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Dec 11 09:12:28 compute-0 ceph-mon[74426]: log_channel(cluster) log [INF] : mon.compute-0 is new leader, mons compute-0 in quorum (ranks 0)
Dec 11 09:12:28 compute-0 ceph-mon[74426]: log_channel(cluster) log [DBG] : monmap epoch 1
Dec 11 09:12:28 compute-0 ceph-mon[74426]: log_channel(cluster) log [DBG] : fsid 31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060
Dec 11 09:12:28 compute-0 ceph-mon[74426]: log_channel(cluster) log [DBG] : last_changed 2025-12-11T09:12:23.814502+0000
Dec 11 09:12:28 compute-0 ceph-mon[74426]: log_channel(cluster) log [DBG] : created 2025-12-11T09:12:23.814502+0000
Dec 11 09:12:28 compute-0 ceph-mon[74426]: log_channel(cluster) log [DBG] : min_mon_release 19 (squid)
Dec 11 09:12:28 compute-0 ceph-mon[74426]: log_channel(cluster) log [DBG] : election_strategy: 1
Dec 11 09:12:28 compute-0 ceph-mon[74426]: log_channel(cluster) log [DBG] : 0: [v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0] mon.compute-0
Dec 11 09:12:28 compute-0 ceph-mon[74426]: mon.compute-0@0(leader) e1 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Dec 11 09:12:28 compute-0 ceph-mon[74426]: log_channel(cluster) log [DBG] : fsmap 
Dec 11 09:12:28 compute-0 ceph-mon[74426]: log_channel(cluster) log [DBG] : osdmap e1: 0 total, 0 up, 0 in
Dec 11 09:12:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8a43a2b4599804e1c7b40798a02dd03a5db3b4805f3a0a3f4bb6c0dfb679ff96/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 11 09:12:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8a43a2b4599804e1c7b40798a02dd03a5db3b4805f3a0a3f4bb6c0dfb679ff96/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Dec 11 09:12:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8a43a2b4599804e1c7b40798a02dd03a5db3b4805f3a0a3f4bb6c0dfb679ff96/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 11 09:12:28 compute-0 ceph-mon[74426]: log_channel(cluster) log [DBG] : mgrmap e1: no daemons active
Dec 11 09:12:28 compute-0 podman[74427]: 2025-12-11 09:12:28.954927222 +0000 UTC m=+0.261559706 container init 210280acc12c28943232be3a864787990d3aa8eb4488ced95e41f6412a021cb9 (image=quay.io/ceph/ceph:v19, name=beautiful_morse, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Dec 11 09:12:28 compute-0 podman[74427]: 2025-12-11 09:12:28.96384965 +0000 UTC m=+0.270482104 container start 210280acc12c28943232be3a864787990d3aa8eb4488ced95e41f6412a021cb9 (image=quay.io/ceph/ceph:v19, name=beautiful_morse, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1)
Dec 11 09:12:28 compute-0 ceph-mon[74426]: mon.compute-0 is new leader, mons compute-0 in quorum (ranks 0)
Dec 11 09:12:28 compute-0 ceph-mon[74426]: monmap epoch 1
Dec 11 09:12:28 compute-0 ceph-mon[74426]: fsid 31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060
Dec 11 09:12:28 compute-0 ceph-mon[74426]: last_changed 2025-12-11T09:12:23.814502+0000
Dec 11 09:12:28 compute-0 ceph-mon[74426]: created 2025-12-11T09:12:23.814502+0000
Dec 11 09:12:28 compute-0 ceph-mon[74426]: min_mon_release 19 (squid)
Dec 11 09:12:28 compute-0 ceph-mon[74426]: election_strategy: 1
Dec 11 09:12:28 compute-0 ceph-mon[74426]: 0: [v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0] mon.compute-0
Dec 11 09:12:28 compute-0 ceph-mon[74426]: fsmap 
Dec 11 09:12:28 compute-0 ceph-mon[74426]: osdmap e1: 0 total, 0 up, 0 in
Dec 11 09:12:28 compute-0 ceph-mon[74426]: mgrmap e1: no daemons active
Dec 11 09:12:28 compute-0 podman[74427]: 2025-12-11 09:12:28.986940539 +0000 UTC m=+0.293573023 container attach 210280acc12c28943232be3a864787990d3aa8eb4488ced95e41f6412a021cb9 (image=quay.io/ceph/ceph:v19, name=beautiful_morse, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec 11 09:12:29 compute-0 ceph-mon[74426]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=public_network}] v 0)
Dec 11 09:12:29 compute-0 systemd[1]: libpod-210280acc12c28943232be3a864787990d3aa8eb4488ced95e41f6412a021cb9.scope: Deactivated successfully.
Dec 11 09:12:29 compute-0 podman[74427]: 2025-12-11 09:12:29.175617166 +0000 UTC m=+0.482249630 container died 210280acc12c28943232be3a864787990d3aa8eb4488ced95e41f6412a021cb9 (image=quay.io/ceph/ceph:v19, name=beautiful_morse, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Dec 11 09:12:29 compute-0 systemd[1]: var-lib-containers-storage-overlay-8a43a2b4599804e1c7b40798a02dd03a5db3b4805f3a0a3f4bb6c0dfb679ff96-merged.mount: Deactivated successfully.
Dec 11 09:12:29 compute-0 podman[74427]: 2025-12-11 09:12:29.22621745 +0000 UTC m=+0.532849905 container remove 210280acc12c28943232be3a864787990d3aa8eb4488ced95e41f6412a021cb9 (image=quay.io/ceph/ceph:v19, name=beautiful_morse, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Dec 11 09:12:29 compute-0 systemd[1]: libpod-conmon-210280acc12c28943232be3a864787990d3aa8eb4488ced95e41f6412a021cb9.scope: Deactivated successfully.
Dec 11 09:12:29 compute-0 podman[74519]: 2025-12-11 09:12:29.270925353 +0000 UTC m=+0.020665074 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Dec 11 09:12:29 compute-0 podman[74519]: 2025-12-11 09:12:29.523735506 +0000 UTC m=+0.273475207 container create f7a75848070a6a3853153e3a283fea6066f92eaabbec3de31afdeb49ee7b0766 (image=quay.io/ceph/ceph:v19, name=goofy_feistel, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325)
Dec 11 09:12:29 compute-0 systemd[1]: Started libpod-conmon-f7a75848070a6a3853153e3a283fea6066f92eaabbec3de31afdeb49ee7b0766.scope.
Dec 11 09:12:29 compute-0 systemd[1]: Started libcrun container.
Dec 11 09:12:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dd58d4717f0437f17404c7f9bf36839ee28ee268b37f0236927b4b1b4701de0f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 11 09:12:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dd58d4717f0437f17404c7f9bf36839ee28ee268b37f0236927b4b1b4701de0f/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Dec 11 09:12:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dd58d4717f0437f17404c7f9bf36839ee28ee268b37f0236927b4b1b4701de0f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 11 09:12:29 compute-0 podman[74519]: 2025-12-11 09:12:29.590488445 +0000 UTC m=+0.340228166 container init f7a75848070a6a3853153e3a283fea6066f92eaabbec3de31afdeb49ee7b0766 (image=quay.io/ceph/ceph:v19, name=goofy_feistel, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.license=GPLv2)
Dec 11 09:12:29 compute-0 podman[74519]: 2025-12-11 09:12:29.595665816 +0000 UTC m=+0.345405517 container start f7a75848070a6a3853153e3a283fea6066f92eaabbec3de31afdeb49ee7b0766 (image=quay.io/ceph/ceph:v19, name=goofy_feistel, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Dec 11 09:12:29 compute-0 podman[74519]: 2025-12-11 09:12:29.626575949 +0000 UTC m=+0.376315670 container attach f7a75848070a6a3853153e3a283fea6066f92eaabbec3de31afdeb49ee7b0766 (image=quay.io/ceph/ceph:v19, name=goofy_feistel, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid)
Dec 11 09:12:29 compute-0 ceph-mon[74426]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=cluster_network}] v 0)
Dec 11 09:12:29 compute-0 systemd[1]: libpod-f7a75848070a6a3853153e3a283fea6066f92eaabbec3de31afdeb49ee7b0766.scope: Deactivated successfully.
Dec 11 09:12:29 compute-0 podman[74519]: 2025-12-11 09:12:29.805548354 +0000 UTC m=+0.555288075 container died f7a75848070a6a3853153e3a283fea6066f92eaabbec3de31afdeb49ee7b0766 (image=quay.io/ceph/ceph:v19, name=goofy_feistel, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Dec 11 09:12:29 compute-0 systemd[1]: var-lib-containers-storage-overlay-dd58d4717f0437f17404c7f9bf36839ee28ee268b37f0236927b4b1b4701de0f-merged.mount: Deactivated successfully.
Dec 11 09:12:29 compute-0 podman[74519]: 2025-12-11 09:12:29.878553546 +0000 UTC m=+0.628293267 container remove f7a75848070a6a3853153e3a283fea6066f92eaabbec3de31afdeb49ee7b0766 (image=quay.io/ceph/ceph:v19, name=goofy_feistel, CEPH_REF=squid, ceph=True, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 11 09:12:29 compute-0 systemd[1]: libpod-conmon-f7a75848070a6a3853153e3a283fea6066f92eaabbec3de31afdeb49ee7b0766.scope: Deactivated successfully.
Dec 11 09:12:29 compute-0 systemd[1]: Reloading.
Dec 11 09:12:30 compute-0 systemd-rc-local-generator[74600]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 11 09:12:30 compute-0 systemd-sysv-generator[74604]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 11 09:12:30 compute-0 systemd[1]: Reloading.
Dec 11 09:12:30 compute-0 systemd-rc-local-generator[74639]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 11 09:12:30 compute-0 systemd-sysv-generator[74643]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 11 09:12:30 compute-0 systemd[1]: Starting Ceph mgr.compute-0.wwpcae for 31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060...
Dec 11 09:12:30 compute-0 podman[74697]: 2025-12-11 09:12:30.648701921 +0000 UTC m=+0.026073972 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Dec 11 09:12:30 compute-0 podman[74697]: 2025-12-11 09:12:30.753504845 +0000 UTC m=+0.130876906 container create b761e15f43723ba37fb7e45df577a225dbb9a0d970d0f7ba161c4ecfd431fdb7 (image=quay.io/ceph/ceph:v19, name=ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-mgr-compute-0-wwpcae, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Dec 11 09:12:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f5beb78fb0a9f4bc1061852799a57b2a3c610ad8513cc292978e37c6723ef3ac/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 11 09:12:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f5beb78fb0a9f4bc1061852799a57b2a3c610ad8513cc292978e37c6723ef3ac/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 11 09:12:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f5beb78fb0a9f4bc1061852799a57b2a3c610ad8513cc292978e37c6723ef3ac/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 11 09:12:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f5beb78fb0a9f4bc1061852799a57b2a3c610ad8513cc292978e37c6723ef3ac/merged/var/lib/ceph/mgr/ceph-compute-0.wwpcae supports timestamps until 2038 (0x7fffffff)
Dec 11 09:12:30 compute-0 podman[74697]: 2025-12-11 09:12:30.818487389 +0000 UTC m=+0.195859430 container init b761e15f43723ba37fb7e45df577a225dbb9a0d970d0f7ba161c4ecfd431fdb7 (image=quay.io/ceph/ceph:v19, name=ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-mgr-compute-0-wwpcae, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=squid)
Dec 11 09:12:30 compute-0 podman[74697]: 2025-12-11 09:12:30.824127055 +0000 UTC m=+0.201499076 container start b761e15f43723ba37fb7e45df577a225dbb9a0d970d0f7ba161c4ecfd431fdb7 (image=quay.io/ceph/ceph:v19, name=ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-mgr-compute-0-wwpcae, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 11 09:12:30 compute-0 bash[74697]: b761e15f43723ba37fb7e45df577a225dbb9a0d970d0f7ba161c4ecfd431fdb7
Dec 11 09:12:30 compute-0 systemd[1]: Started Ceph mgr.compute-0.wwpcae for 31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060.
Dec 11 09:12:30 compute-0 ceph-mgr[74715]: set uid:gid to 167:167 (ceph:ceph)
Dec 11 09:12:30 compute-0 ceph-mgr[74715]: ceph version 19.2.3 (c92aebb279828e9c3c1f5d24613efca272649e62) squid (stable), process ceph-mgr, pid 2
Dec 11 09:12:30 compute-0 ceph-mgr[74715]: pidfile_write: ignore empty --pid-file
Dec 11 09:12:30 compute-0 podman[74716]: 2025-12-11 09:12:30.979606241 +0000 UTC m=+0.110688082 container create 5f068ed6c9acfaaf8099c5c14e03f8ec60b1592a6638b942cb01df14c0c31e37 (image=quay.io/ceph/ceph:v19, name=exciting_goldwasser, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325)
Dec 11 09:12:30 compute-0 podman[74716]: 2025-12-11 09:12:30.955292035 +0000 UTC m=+0.086373906 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Dec 11 09:12:31 compute-0 ceph-mgr[74715]: mgr[py] Loading python module 'alerts'
Dec 11 09:12:31 compute-0 systemd[1]: Started libpod-conmon-5f068ed6c9acfaaf8099c5c14e03f8ec60b1592a6638b942cb01df14c0c31e37.scope.
Dec 11 09:12:31 compute-0 systemd[1]: Started libcrun container.
Dec 11 09:12:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fd5ff2a85e70de0c40a623a39bef8fbcfbbe4da8e02cecb1d6346fe6476cdd0e/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Dec 11 09:12:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fd5ff2a85e70de0c40a623a39bef8fbcfbbe4da8e02cecb1d6346fe6476cdd0e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 11 09:12:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fd5ff2a85e70de0c40a623a39bef8fbcfbbe4da8e02cecb1d6346fe6476cdd0e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 11 09:12:31 compute-0 podman[74716]: 2025-12-11 09:12:31.083107871 +0000 UTC m=+0.214189742 container init 5f068ed6c9acfaaf8099c5c14e03f8ec60b1592a6638b942cb01df14c0c31e37 (image=quay.io/ceph/ceph:v19, name=exciting_goldwasser, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec 11 09:12:31 compute-0 podman[74716]: 2025-12-11 09:12:31.094134394 +0000 UTC m=+0.225216235 container start 5f068ed6c9acfaaf8099c5c14e03f8ec60b1592a6638b942cb01df14c0c31e37 (image=quay.io/ceph/ceph:v19, name=exciting_goldwasser, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, OSD_FLAVOR=default)
Dec 11 09:12:31 compute-0 podman[74716]: 2025-12-11 09:12:31.097741017 +0000 UTC m=+0.228822878 container attach 5f068ed6c9acfaaf8099c5c14e03f8ec60b1592a6638b942cb01df14c0c31e37 (image=quay.io/ceph/ceph:v19, name=exciting_goldwasser, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Dec 11 09:12:31 compute-0 ceph-mgr[74715]: mgr[py] Module alerts has missing NOTIFY_TYPES member
Dec 11 09:12:31 compute-0 ceph-mgr[74715]: mgr[py] Loading python module 'balancer'
Dec 11 09:12:31 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-mgr-compute-0-wwpcae[74711]: 2025-12-11T09:12:31.119+0000 7fd4371ba140 -1 mgr[py] Module alerts has missing NOTIFY_TYPES member
Dec 11 09:12:31 compute-0 ceph-mgr[74715]: mgr[py] Module balancer has missing NOTIFY_TYPES member
Dec 11 09:12:31 compute-0 ceph-mgr[74715]: mgr[py] Loading python module 'cephadm'
Dec 11 09:12:31 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-mgr-compute-0-wwpcae[74711]: 2025-12-11T09:12:31.209+0000 7fd4371ba140 -1 mgr[py] Module balancer has missing NOTIFY_TYPES member
Dec 11 09:12:31 compute-0 ceph-mon[74426]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json-pretty"} v 0)
Dec 11 09:12:31 compute-0 ceph-mon[74426]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4056094776' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
Dec 11 09:12:31 compute-0 exciting_goldwasser[74752]: 
Dec 11 09:12:31 compute-0 exciting_goldwasser[74752]: {
Dec 11 09:12:31 compute-0 exciting_goldwasser[74752]:     "fsid": "31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060",
Dec 11 09:12:31 compute-0 exciting_goldwasser[74752]:     "health": {
Dec 11 09:12:31 compute-0 exciting_goldwasser[74752]:         "status": "HEALTH_OK",
Dec 11 09:12:31 compute-0 exciting_goldwasser[74752]:         "checks": {},
Dec 11 09:12:31 compute-0 exciting_goldwasser[74752]:         "mutes": []
Dec 11 09:12:31 compute-0 exciting_goldwasser[74752]:     },
Dec 11 09:12:31 compute-0 exciting_goldwasser[74752]:     "election_epoch": 5,
Dec 11 09:12:31 compute-0 exciting_goldwasser[74752]:     "quorum": [
Dec 11 09:12:31 compute-0 exciting_goldwasser[74752]:         0
Dec 11 09:12:31 compute-0 exciting_goldwasser[74752]:     ],
Dec 11 09:12:31 compute-0 exciting_goldwasser[74752]:     "quorum_names": [
Dec 11 09:12:31 compute-0 exciting_goldwasser[74752]:         "compute-0"
Dec 11 09:12:31 compute-0 exciting_goldwasser[74752]:     ],
Dec 11 09:12:31 compute-0 exciting_goldwasser[74752]:     "quorum_age": 2,
Dec 11 09:12:31 compute-0 exciting_goldwasser[74752]:     "monmap": {
Dec 11 09:12:31 compute-0 exciting_goldwasser[74752]:         "epoch": 1,
Dec 11 09:12:31 compute-0 exciting_goldwasser[74752]:         "min_mon_release_name": "squid",
Dec 11 09:12:31 compute-0 exciting_goldwasser[74752]:         "num_mons": 1
Dec 11 09:12:31 compute-0 exciting_goldwasser[74752]:     },
Dec 11 09:12:31 compute-0 exciting_goldwasser[74752]:     "osdmap": {
Dec 11 09:12:31 compute-0 exciting_goldwasser[74752]:         "epoch": 1,
Dec 11 09:12:31 compute-0 exciting_goldwasser[74752]:         "num_osds": 0,
Dec 11 09:12:31 compute-0 exciting_goldwasser[74752]:         "num_up_osds": 0,
Dec 11 09:12:31 compute-0 exciting_goldwasser[74752]:         "osd_up_since": 0,
Dec 11 09:12:31 compute-0 exciting_goldwasser[74752]:         "num_in_osds": 0,
Dec 11 09:12:31 compute-0 exciting_goldwasser[74752]:         "osd_in_since": 0,
Dec 11 09:12:31 compute-0 exciting_goldwasser[74752]:         "num_remapped_pgs": 0
Dec 11 09:12:31 compute-0 exciting_goldwasser[74752]:     },
Dec 11 09:12:31 compute-0 exciting_goldwasser[74752]:     "pgmap": {
Dec 11 09:12:31 compute-0 exciting_goldwasser[74752]:         "pgs_by_state": [],
Dec 11 09:12:31 compute-0 exciting_goldwasser[74752]:         "num_pgs": 0,
Dec 11 09:12:31 compute-0 exciting_goldwasser[74752]:         "num_pools": 0,
Dec 11 09:12:31 compute-0 exciting_goldwasser[74752]:         "num_objects": 0,
Dec 11 09:12:31 compute-0 exciting_goldwasser[74752]:         "data_bytes": 0,
Dec 11 09:12:31 compute-0 exciting_goldwasser[74752]:         "bytes_used": 0,
Dec 11 09:12:31 compute-0 exciting_goldwasser[74752]:         "bytes_avail": 0,
Dec 11 09:12:31 compute-0 exciting_goldwasser[74752]:         "bytes_total": 0
Dec 11 09:12:31 compute-0 exciting_goldwasser[74752]:     },
Dec 11 09:12:31 compute-0 exciting_goldwasser[74752]:     "fsmap": {
Dec 11 09:12:31 compute-0 exciting_goldwasser[74752]:         "epoch": 1,
Dec 11 09:12:31 compute-0 exciting_goldwasser[74752]:         "btime": "2025-12-11T09:12:26:191373+0000",
Dec 11 09:12:31 compute-0 exciting_goldwasser[74752]:         "by_rank": [],
Dec 11 09:12:31 compute-0 exciting_goldwasser[74752]:         "up:standby": 0
Dec 11 09:12:31 compute-0 exciting_goldwasser[74752]:     },
Dec 11 09:12:31 compute-0 exciting_goldwasser[74752]:     "mgrmap": {
Dec 11 09:12:31 compute-0 exciting_goldwasser[74752]:         "available": false,
Dec 11 09:12:31 compute-0 exciting_goldwasser[74752]:         "num_standbys": 0,
Dec 11 09:12:31 compute-0 exciting_goldwasser[74752]:         "modules": [
Dec 11 09:12:31 compute-0 exciting_goldwasser[74752]:             "iostat",
Dec 11 09:12:31 compute-0 exciting_goldwasser[74752]:             "nfs",
Dec 11 09:12:31 compute-0 exciting_goldwasser[74752]:             "restful"
Dec 11 09:12:31 compute-0 exciting_goldwasser[74752]:         ],
Dec 11 09:12:31 compute-0 exciting_goldwasser[74752]:         "services": {}
Dec 11 09:12:31 compute-0 exciting_goldwasser[74752]:     },
Dec 11 09:12:31 compute-0 exciting_goldwasser[74752]:     "servicemap": {
Dec 11 09:12:31 compute-0 exciting_goldwasser[74752]:         "epoch": 1,
Dec 11 09:12:31 compute-0 exciting_goldwasser[74752]:         "modified": "2025-12-11T09:12:26.198210+0000",
Dec 11 09:12:31 compute-0 exciting_goldwasser[74752]:         "services": {}
Dec 11 09:12:31 compute-0 exciting_goldwasser[74752]:     },
Dec 11 09:12:31 compute-0 exciting_goldwasser[74752]:     "progress_events": {}
Dec 11 09:12:31 compute-0 exciting_goldwasser[74752]: }
Dec 11 09:12:31 compute-0 systemd[1]: libpod-5f068ed6c9acfaaf8099c5c14e03f8ec60b1592a6638b942cb01df14c0c31e37.scope: Deactivated successfully.
Dec 11 09:12:31 compute-0 podman[74716]: 2025-12-11 09:12:31.326061896 +0000 UTC m=+0.457143757 container died 5f068ed6c9acfaaf8099c5c14e03f8ec60b1592a6638b942cb01df14c0c31e37 (image=quay.io/ceph/ceph:v19, name=exciting_goldwasser, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Dec 11 09:12:31 compute-0 systemd[1]: var-lib-containers-storage-overlay-fd5ff2a85e70de0c40a623a39bef8fbcfbbe4da8e02cecb1d6346fe6476cdd0e-merged.mount: Deactivated successfully.
Dec 11 09:12:31 compute-0 ceph-mon[74426]: from='client.? 192.168.122.100:0/4056094776' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
Dec 11 09:12:31 compute-0 podman[74716]: 2025-12-11 09:12:31.364407231 +0000 UTC m=+0.495489072 container remove 5f068ed6c9acfaaf8099c5c14e03f8ec60b1592a6638b942cb01df14c0c31e37 (image=quay.io/ceph/ceph:v19, name=exciting_goldwasser, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 11 09:12:31 compute-0 systemd[1]: libpod-conmon-5f068ed6c9acfaaf8099c5c14e03f8ec60b1592a6638b942cb01df14c0c31e37.scope: Deactivated successfully.
Dec 11 09:12:32 compute-0 ceph-mgr[74715]: mgr[py] Loading python module 'crash'
Dec 11 09:12:32 compute-0 ceph-mgr[74715]: mgr[py] Module crash has missing NOTIFY_TYPES member
Dec 11 09:12:32 compute-0 ceph-mgr[74715]: mgr[py] Loading python module 'dashboard'
Dec 11 09:12:32 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-mgr-compute-0-wwpcae[74711]: 2025-12-11T09:12:32.217+0000 7fd4371ba140 -1 mgr[py] Module crash has missing NOTIFY_TYPES member
Dec 11 09:12:32 compute-0 ceph-mgr[74715]: mgr[py] Loading python module 'devicehealth'
Dec 11 09:12:32 compute-0 ceph-mgr[74715]: mgr[py] Module devicehealth has missing NOTIFY_TYPES member
Dec 11 09:12:32 compute-0 ceph-mgr[74715]: mgr[py] Loading python module 'diskprediction_local'
Dec 11 09:12:32 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-mgr-compute-0-wwpcae[74711]: 2025-12-11T09:12:32.966+0000 7fd4371ba140 -1 mgr[py] Module devicehealth has missing NOTIFY_TYPES member
Dec 11 09:12:33 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-mgr-compute-0-wwpcae[74711]: /lib64/python3.9/site-packages/scipy/__init__.py:73: UserWarning: NumPy was imported from a Python sub-interpreter but NumPy does not properly support sub-interpreters. This will likely work for most users but might cause hard to track down issues or subtle bugs. A common user of the rare sub-interpreter feature is wsgi which also allows single-interpreter mode.
Dec 11 09:12:33 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-mgr-compute-0-wwpcae[74711]: Improvements in the case of bugs are welcome, but is not on the NumPy roadmap, and full support may require significant effort to achieve.
Dec 11 09:12:33 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-mgr-compute-0-wwpcae[74711]:   from numpy import show_config as show_numpy_config
Dec 11 09:12:33 compute-0 ceph-mgr[74715]: mgr[py] Module diskprediction_local has missing NOTIFY_TYPES member
Dec 11 09:12:33 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-mgr-compute-0-wwpcae[74711]: 2025-12-11T09:12:33.187+0000 7fd4371ba140 -1 mgr[py] Module diskprediction_local has missing NOTIFY_TYPES member
Dec 11 09:12:33 compute-0 ceph-mgr[74715]: mgr[py] Loading python module 'influx'
Dec 11 09:12:33 compute-0 ceph-mgr[74715]: mgr[py] Module influx has missing NOTIFY_TYPES member
Dec 11 09:12:33 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-mgr-compute-0-wwpcae[74711]: 2025-12-11T09:12:33.272+0000 7fd4371ba140 -1 mgr[py] Module influx has missing NOTIFY_TYPES member
Dec 11 09:12:33 compute-0 ceph-mgr[74715]: mgr[py] Loading python module 'insights'
Dec 11 09:12:33 compute-0 ceph-mgr[74715]: mgr[py] Loading python module 'iostat'
Dec 11 09:12:33 compute-0 ceph-mgr[74715]: mgr[py] Module iostat has missing NOTIFY_TYPES member
Dec 11 09:12:33 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-mgr-compute-0-wwpcae[74711]: 2025-12-11T09:12:33.449+0000 7fd4371ba140 -1 mgr[py] Module iostat has missing NOTIFY_TYPES member
Dec 11 09:12:33 compute-0 ceph-mgr[74715]: mgr[py] Loading python module 'k8sevents'
Dec 11 09:12:33 compute-0 podman[74800]: 2025-12-11 09:12:33.452657555 +0000 UTC m=+0.055059457 container create ff1acb12294484685ba275c2564b0910dfad29ced1cf73ad6d237fcdb612e83b (image=quay.io/ceph/ceph:v19, name=heuristic_ellis, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1)
Dec 11 09:12:33 compute-0 systemd[1]: Started libpod-conmon-ff1acb12294484685ba275c2564b0910dfad29ced1cf73ad6d237fcdb612e83b.scope.
Dec 11 09:12:33 compute-0 podman[74800]: 2025-12-11 09:12:33.429263226 +0000 UTC m=+0.031665148 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Dec 11 09:12:33 compute-0 systemd[1]: Started libcrun container.
Dec 11 09:12:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5f34989a7590d4325a317c293fbc751dbf82085f5a692765f0d7f77eb307845d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 11 09:12:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5f34989a7590d4325a317c293fbc751dbf82085f5a692765f0d7f77eb307845d/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 11 09:12:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5f34989a7590d4325a317c293fbc751dbf82085f5a692765f0d7f77eb307845d/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Dec 11 09:12:33 compute-0 podman[74800]: 2025-12-11 09:12:33.549005075 +0000 UTC m=+0.151406997 container init ff1acb12294484685ba275c2564b0910dfad29ced1cf73ad6d237fcdb612e83b (image=quay.io/ceph/ceph:v19, name=heuristic_ellis, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1)
Dec 11 09:12:33 compute-0 podman[74800]: 2025-12-11 09:12:33.555087854 +0000 UTC m=+0.157489756 container start ff1acb12294484685ba275c2564b0910dfad29ced1cf73ad6d237fcdb612e83b (image=quay.io/ceph/ceph:v19, name=heuristic_ellis, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 11 09:12:33 compute-0 podman[74800]: 2025-12-11 09:12:33.558982366 +0000 UTC m=+0.161384268 container attach ff1acb12294484685ba275c2564b0910dfad29ced1cf73ad6d237fcdb612e83b (image=quay.io/ceph/ceph:v19, name=heuristic_ellis, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1)
Dec 11 09:12:33 compute-0 ceph-mon[74426]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json-pretty"} v 0)
Dec 11 09:12:33 compute-0 ceph-mon[74426]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2338200952' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
Dec 11 09:12:33 compute-0 heuristic_ellis[74817]: 
Dec 11 09:12:33 compute-0 heuristic_ellis[74817]: {
Dec 11 09:12:33 compute-0 heuristic_ellis[74817]:     "fsid": "31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060",
Dec 11 09:12:33 compute-0 heuristic_ellis[74817]:     "health": {
Dec 11 09:12:33 compute-0 heuristic_ellis[74817]:         "status": "HEALTH_OK",
Dec 11 09:12:33 compute-0 heuristic_ellis[74817]:         "checks": {},
Dec 11 09:12:33 compute-0 heuristic_ellis[74817]:         "mutes": []
Dec 11 09:12:33 compute-0 heuristic_ellis[74817]:     },
Dec 11 09:12:33 compute-0 heuristic_ellis[74817]:     "election_epoch": 5,
Dec 11 09:12:33 compute-0 heuristic_ellis[74817]:     "quorum": [
Dec 11 09:12:33 compute-0 heuristic_ellis[74817]:         0
Dec 11 09:12:33 compute-0 heuristic_ellis[74817]:     ],
Dec 11 09:12:33 compute-0 heuristic_ellis[74817]:     "quorum_names": [
Dec 11 09:12:33 compute-0 heuristic_ellis[74817]:         "compute-0"
Dec 11 09:12:33 compute-0 heuristic_ellis[74817]:     ],
Dec 11 09:12:33 compute-0 heuristic_ellis[74817]:     "quorum_age": 4,
Dec 11 09:12:33 compute-0 heuristic_ellis[74817]:     "monmap": {
Dec 11 09:12:33 compute-0 heuristic_ellis[74817]:         "epoch": 1,
Dec 11 09:12:33 compute-0 heuristic_ellis[74817]:         "min_mon_release_name": "squid",
Dec 11 09:12:33 compute-0 heuristic_ellis[74817]:         "num_mons": 1
Dec 11 09:12:33 compute-0 heuristic_ellis[74817]:     },
Dec 11 09:12:33 compute-0 heuristic_ellis[74817]:     "osdmap": {
Dec 11 09:12:33 compute-0 heuristic_ellis[74817]:         "epoch": 1,
Dec 11 09:12:33 compute-0 heuristic_ellis[74817]:         "num_osds": 0,
Dec 11 09:12:33 compute-0 heuristic_ellis[74817]:         "num_up_osds": 0,
Dec 11 09:12:33 compute-0 heuristic_ellis[74817]:         "osd_up_since": 0,
Dec 11 09:12:33 compute-0 heuristic_ellis[74817]:         "num_in_osds": 0,
Dec 11 09:12:33 compute-0 heuristic_ellis[74817]:         "osd_in_since": 0,
Dec 11 09:12:33 compute-0 heuristic_ellis[74817]:         "num_remapped_pgs": 0
Dec 11 09:12:33 compute-0 heuristic_ellis[74817]:     },
Dec 11 09:12:33 compute-0 heuristic_ellis[74817]:     "pgmap": {
Dec 11 09:12:33 compute-0 heuristic_ellis[74817]:         "pgs_by_state": [],
Dec 11 09:12:33 compute-0 heuristic_ellis[74817]:         "num_pgs": 0,
Dec 11 09:12:33 compute-0 heuristic_ellis[74817]:         "num_pools": 0,
Dec 11 09:12:33 compute-0 heuristic_ellis[74817]:         "num_objects": 0,
Dec 11 09:12:33 compute-0 heuristic_ellis[74817]:         "data_bytes": 0,
Dec 11 09:12:33 compute-0 heuristic_ellis[74817]:         "bytes_used": 0,
Dec 11 09:12:33 compute-0 heuristic_ellis[74817]:         "bytes_avail": 0,
Dec 11 09:12:33 compute-0 heuristic_ellis[74817]:         "bytes_total": 0
Dec 11 09:12:33 compute-0 heuristic_ellis[74817]:     },
Dec 11 09:12:33 compute-0 heuristic_ellis[74817]:     "fsmap": {
Dec 11 09:12:33 compute-0 heuristic_ellis[74817]:         "epoch": 1,
Dec 11 09:12:33 compute-0 heuristic_ellis[74817]:         "btime": "2025-12-11T09:12:26:191373+0000",
Dec 11 09:12:33 compute-0 heuristic_ellis[74817]:         "by_rank": [],
Dec 11 09:12:33 compute-0 heuristic_ellis[74817]:         "up:standby": 0
Dec 11 09:12:33 compute-0 heuristic_ellis[74817]:     },
Dec 11 09:12:33 compute-0 heuristic_ellis[74817]:     "mgrmap": {
Dec 11 09:12:33 compute-0 heuristic_ellis[74817]:         "available": false,
Dec 11 09:12:33 compute-0 heuristic_ellis[74817]:         "num_standbys": 0,
Dec 11 09:12:33 compute-0 heuristic_ellis[74817]:         "modules": [
Dec 11 09:12:33 compute-0 heuristic_ellis[74817]:             "iostat",
Dec 11 09:12:33 compute-0 heuristic_ellis[74817]:             "nfs",
Dec 11 09:12:33 compute-0 heuristic_ellis[74817]:             "restful"
Dec 11 09:12:33 compute-0 heuristic_ellis[74817]:         ],
Dec 11 09:12:33 compute-0 heuristic_ellis[74817]:         "services": {}
Dec 11 09:12:33 compute-0 heuristic_ellis[74817]:     },
Dec 11 09:12:33 compute-0 heuristic_ellis[74817]:     "servicemap": {
Dec 11 09:12:33 compute-0 heuristic_ellis[74817]:         "epoch": 1,
Dec 11 09:12:33 compute-0 heuristic_ellis[74817]:         "modified": "2025-12-11T09:12:26.198210+0000",
Dec 11 09:12:33 compute-0 heuristic_ellis[74817]:         "services": {}
Dec 11 09:12:33 compute-0 heuristic_ellis[74817]:     },
Dec 11 09:12:33 compute-0 heuristic_ellis[74817]:     "progress_events": {}
Dec 11 09:12:33 compute-0 heuristic_ellis[74817]: }
Dec 11 09:12:33 compute-0 systemd[1]: libpod-ff1acb12294484685ba275c2564b0910dfad29ced1cf73ad6d237fcdb612e83b.scope: Deactivated successfully.
Dec 11 09:12:33 compute-0 podman[74800]: 2025-12-11 09:12:33.773369768 +0000 UTC m=+0.375771690 container died ff1acb12294484685ba275c2564b0910dfad29ced1cf73ad6d237fcdb612e83b (image=quay.io/ceph/ceph:v19, name=heuristic_ellis, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, OSD_FLAVOR=default)
Dec 11 09:12:33 compute-0 systemd[1]: var-lib-containers-storage-overlay-5f34989a7590d4325a317c293fbc751dbf82085f5a692765f0d7f77eb307845d-merged.mount: Deactivated successfully.
Dec 11 09:12:33 compute-0 podman[74800]: 2025-12-11 09:12:33.805538415 +0000 UTC m=+0.407940317 container remove ff1acb12294484685ba275c2564b0910dfad29ced1cf73ad6d237fcdb612e83b (image=quay.io/ceph/ceph:v19, name=heuristic_ellis, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=squid, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Dec 11 09:12:33 compute-0 ceph-mon[74426]: from='client.? 192.168.122.100:0/2338200952' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
Dec 11 09:12:33 compute-0 systemd[1]: libpod-conmon-ff1acb12294484685ba275c2564b0910dfad29ced1cf73ad6d237fcdb612e83b.scope: Deactivated successfully.
Dec 11 09:12:33 compute-0 ceph-mgr[74715]: mgr[py] Loading python module 'localpool'
Dec 11 09:12:33 compute-0 ceph-mgr[74715]: mgr[py] Loading python module 'mds_autoscaler'
Dec 11 09:12:34 compute-0 ceph-mgr[74715]: mgr[py] Loading python module 'mirroring'
Dec 11 09:12:34 compute-0 ceph-mgr[74715]: mgr[py] Loading python module 'nfs'
Dec 11 09:12:34 compute-0 ceph-mgr[74715]: mgr[py] Module nfs has missing NOTIFY_TYPES member
Dec 11 09:12:34 compute-0 ceph-mgr[74715]: mgr[py] Loading python module 'orchestrator'
Dec 11 09:12:34 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-mgr-compute-0-wwpcae[74711]: 2025-12-11T09:12:34.587+0000 7fd4371ba140 -1 mgr[py] Module nfs has missing NOTIFY_TYPES member
Dec 11 09:12:34 compute-0 ceph-mgr[74715]: mgr[py] Module orchestrator has missing NOTIFY_TYPES member
Dec 11 09:12:34 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-mgr-compute-0-wwpcae[74711]: 2025-12-11T09:12:34.836+0000 7fd4371ba140 -1 mgr[py] Module orchestrator has missing NOTIFY_TYPES member
Dec 11 09:12:34 compute-0 ceph-mgr[74715]: mgr[py] Loading python module 'osd_perf_query'
Dec 11 09:12:34 compute-0 ceph-mgr[74715]: mgr[py] Module osd_perf_query has missing NOTIFY_TYPES member
Dec 11 09:12:34 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-mgr-compute-0-wwpcae[74711]: 2025-12-11T09:12:34.925+0000 7fd4371ba140 -1 mgr[py] Module osd_perf_query has missing NOTIFY_TYPES member
Dec 11 09:12:34 compute-0 ceph-mgr[74715]: mgr[py] Loading python module 'osd_support'
Dec 11 09:12:35 compute-0 ceph-mgr[74715]: mgr[py] Module osd_support has missing NOTIFY_TYPES member
Dec 11 09:12:35 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-mgr-compute-0-wwpcae[74711]: 2025-12-11T09:12:35.006+0000 7fd4371ba140 -1 mgr[py] Module osd_support has missing NOTIFY_TYPES member
Dec 11 09:12:35 compute-0 ceph-mgr[74715]: mgr[py] Loading python module 'pg_autoscaler'
Dec 11 09:12:35 compute-0 ceph-mgr[74715]: mgr[py] Module pg_autoscaler has missing NOTIFY_TYPES member
Dec 11 09:12:35 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-mgr-compute-0-wwpcae[74711]: 2025-12-11T09:12:35.097+0000 7fd4371ba140 -1 mgr[py] Module pg_autoscaler has missing NOTIFY_TYPES member
Dec 11 09:12:35 compute-0 ceph-mgr[74715]: mgr[py] Loading python module 'progress'
Dec 11 09:12:35 compute-0 ceph-mgr[74715]: mgr[py] Module progress has missing NOTIFY_TYPES member
Dec 11 09:12:35 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-mgr-compute-0-wwpcae[74711]: 2025-12-11T09:12:35.181+0000 7fd4371ba140 -1 mgr[py] Module progress has missing NOTIFY_TYPES member
Dec 11 09:12:35 compute-0 ceph-mgr[74715]: mgr[py] Loading python module 'prometheus'
Dec 11 09:12:35 compute-0 ceph-mgr[74715]: mgr[py] Module prometheus has missing NOTIFY_TYPES member
Dec 11 09:12:35 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-mgr-compute-0-wwpcae[74711]: 2025-12-11T09:12:35.568+0000 7fd4371ba140 -1 mgr[py] Module prometheus has missing NOTIFY_TYPES member
Dec 11 09:12:35 compute-0 ceph-mgr[74715]: mgr[py] Loading python module 'rbd_support'
Dec 11 09:12:35 compute-0 ceph-mgr[74715]: mgr[py] Module rbd_support has missing NOTIFY_TYPES member
Dec 11 09:12:35 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-mgr-compute-0-wwpcae[74711]: 2025-12-11T09:12:35.686+0000 7fd4371ba140 -1 mgr[py] Module rbd_support has missing NOTIFY_TYPES member
Dec 11 09:12:35 compute-0 ceph-mgr[74715]: mgr[py] Loading python module 'restful'
Dec 11 09:12:35 compute-0 podman[74855]: 2025-12-11 09:12:35.893738378 +0000 UTC m=+0.054821619 container create 61a40e013b93d6117df6694cd31795c5dc66db3c7995c8fc6ed23ba9efb3bbc4 (image=quay.io/ceph/ceph:v19, name=dazzling_yonath, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 11 09:12:35 compute-0 systemd[1]: Started libpod-conmon-61a40e013b93d6117df6694cd31795c5dc66db3c7995c8fc6ed23ba9efb3bbc4.scope.
Dec 11 09:12:35 compute-0 systemd[1]: Started libcrun container.
Dec 11 09:12:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/eae934fb8ec3f1fd93581070d708d0c1e538ab50d306e953b74e7410b12020ce/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Dec 11 09:12:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/eae934fb8ec3f1fd93581070d708d0c1e538ab50d306e953b74e7410b12020ce/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 11 09:12:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/eae934fb8ec3f1fd93581070d708d0c1e538ab50d306e953b74e7410b12020ce/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 11 09:12:35 compute-0 ceph-mgr[74715]: mgr[py] Loading python module 'rgw'
Dec 11 09:12:35 compute-0 podman[74855]: 2025-12-11 09:12:35.958396701 +0000 UTC m=+0.119479962 container init 61a40e013b93d6117df6694cd31795c5dc66db3c7995c8fc6ed23ba9efb3bbc4 (image=quay.io/ceph/ceph:v19, name=dazzling_yonath, CEPH_REF=squid, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 11 09:12:35 compute-0 podman[74855]: 2025-12-11 09:12:35.964687975 +0000 UTC m=+0.125771216 container start 61a40e013b93d6117df6694cd31795c5dc66db3c7995c8fc6ed23ba9efb3bbc4 (image=quay.io/ceph/ceph:v19, name=dazzling_yonath, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0)
Dec 11 09:12:35 compute-0 podman[74855]: 2025-12-11 09:12:35.871400137 +0000 UTC m=+0.032483408 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Dec 11 09:12:35 compute-0 podman[74855]: 2025-12-11 09:12:35.968659901 +0000 UTC m=+0.129743172 container attach 61a40e013b93d6117df6694cd31795c5dc66db3c7995c8fc6ed23ba9efb3bbc4 (image=quay.io/ceph/ceph:v19, name=dazzling_yonath, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Dec 11 09:12:36 compute-0 ceph-mon[74426]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json-pretty"} v 0)
Dec 11 09:12:36 compute-0 ceph-mon[74426]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3606604342' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
Dec 11 09:12:36 compute-0 dazzling_yonath[74872]: 
Dec 11 09:12:36 compute-0 dazzling_yonath[74872]: {
Dec 11 09:12:36 compute-0 dazzling_yonath[74872]:     "fsid": "31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060",
Dec 11 09:12:36 compute-0 dazzling_yonath[74872]:     "health": {
Dec 11 09:12:36 compute-0 dazzling_yonath[74872]:         "status": "HEALTH_OK",
Dec 11 09:12:36 compute-0 dazzling_yonath[74872]:         "checks": {},
Dec 11 09:12:36 compute-0 dazzling_yonath[74872]:         "mutes": []
Dec 11 09:12:36 compute-0 dazzling_yonath[74872]:     },
Dec 11 09:12:36 compute-0 dazzling_yonath[74872]:     "election_epoch": 5,
Dec 11 09:12:36 compute-0 dazzling_yonath[74872]:     "quorum": [
Dec 11 09:12:36 compute-0 dazzling_yonath[74872]:         0
Dec 11 09:12:36 compute-0 dazzling_yonath[74872]:     ],
Dec 11 09:12:36 compute-0 dazzling_yonath[74872]:     "quorum_names": [
Dec 11 09:12:36 compute-0 dazzling_yonath[74872]:         "compute-0"
Dec 11 09:12:36 compute-0 dazzling_yonath[74872]:     ],
Dec 11 09:12:36 compute-0 dazzling_yonath[74872]:     "quorum_age": 7,
Dec 11 09:12:36 compute-0 dazzling_yonath[74872]:     "monmap": {
Dec 11 09:12:36 compute-0 dazzling_yonath[74872]:         "epoch": 1,
Dec 11 09:12:36 compute-0 dazzling_yonath[74872]:         "min_mon_release_name": "squid",
Dec 11 09:12:36 compute-0 dazzling_yonath[74872]:         "num_mons": 1
Dec 11 09:12:36 compute-0 dazzling_yonath[74872]:     },
Dec 11 09:12:36 compute-0 dazzling_yonath[74872]:     "osdmap": {
Dec 11 09:12:36 compute-0 dazzling_yonath[74872]:         "epoch": 1,
Dec 11 09:12:36 compute-0 dazzling_yonath[74872]:         "num_osds": 0,
Dec 11 09:12:36 compute-0 dazzling_yonath[74872]:         "num_up_osds": 0,
Dec 11 09:12:36 compute-0 dazzling_yonath[74872]:         "osd_up_since": 0,
Dec 11 09:12:36 compute-0 dazzling_yonath[74872]:         "num_in_osds": 0,
Dec 11 09:12:36 compute-0 dazzling_yonath[74872]:         "osd_in_since": 0,
Dec 11 09:12:36 compute-0 dazzling_yonath[74872]:         "num_remapped_pgs": 0
Dec 11 09:12:36 compute-0 dazzling_yonath[74872]:     },
Dec 11 09:12:36 compute-0 dazzling_yonath[74872]:     "pgmap": {
Dec 11 09:12:36 compute-0 dazzling_yonath[74872]:         "pgs_by_state": [],
Dec 11 09:12:36 compute-0 dazzling_yonath[74872]:         "num_pgs": 0,
Dec 11 09:12:36 compute-0 dazzling_yonath[74872]:         "num_pools": 0,
Dec 11 09:12:36 compute-0 dazzling_yonath[74872]:         "num_objects": 0,
Dec 11 09:12:36 compute-0 dazzling_yonath[74872]:         "data_bytes": 0,
Dec 11 09:12:36 compute-0 dazzling_yonath[74872]:         "bytes_used": 0,
Dec 11 09:12:36 compute-0 dazzling_yonath[74872]:         "bytes_avail": 0,
Dec 11 09:12:36 compute-0 dazzling_yonath[74872]:         "bytes_total": 0
Dec 11 09:12:36 compute-0 dazzling_yonath[74872]:     },
Dec 11 09:12:36 compute-0 dazzling_yonath[74872]:     "fsmap": {
Dec 11 09:12:36 compute-0 dazzling_yonath[74872]:         "epoch": 1,
Dec 11 09:12:36 compute-0 dazzling_yonath[74872]:         "btime": "2025-12-11T09:12:26:191373+0000",
Dec 11 09:12:36 compute-0 dazzling_yonath[74872]:         "by_rank": [],
Dec 11 09:12:36 compute-0 dazzling_yonath[74872]:         "up:standby": 0
Dec 11 09:12:36 compute-0 dazzling_yonath[74872]:     },
Dec 11 09:12:36 compute-0 dazzling_yonath[74872]:     "mgrmap": {
Dec 11 09:12:36 compute-0 dazzling_yonath[74872]:         "available": false,
Dec 11 09:12:36 compute-0 dazzling_yonath[74872]:         "num_standbys": 0,
Dec 11 09:12:36 compute-0 dazzling_yonath[74872]:         "modules": [
Dec 11 09:12:36 compute-0 dazzling_yonath[74872]:             "iostat",
Dec 11 09:12:36 compute-0 dazzling_yonath[74872]:             "nfs",
Dec 11 09:12:36 compute-0 dazzling_yonath[74872]:             "restful"
Dec 11 09:12:36 compute-0 dazzling_yonath[74872]:         ],
Dec 11 09:12:36 compute-0 dazzling_yonath[74872]:         "services": {}
Dec 11 09:12:36 compute-0 dazzling_yonath[74872]:     },
Dec 11 09:12:36 compute-0 dazzling_yonath[74872]:     "servicemap": {
Dec 11 09:12:36 compute-0 dazzling_yonath[74872]:         "epoch": 1,
Dec 11 09:12:36 compute-0 dazzling_yonath[74872]:         "modified": "2025-12-11T09:12:26.198210+0000",
Dec 11 09:12:36 compute-0 dazzling_yonath[74872]:         "services": {}
Dec 11 09:12:36 compute-0 dazzling_yonath[74872]:     },
Dec 11 09:12:36 compute-0 dazzling_yonath[74872]:     "progress_events": {}
Dec 11 09:12:36 compute-0 dazzling_yonath[74872]: }
Dec 11 09:12:36 compute-0 systemd[1]: libpod-61a40e013b93d6117df6694cd31795c5dc66db3c7995c8fc6ed23ba9efb3bbc4.scope: Deactivated successfully.
Dec 11 09:12:36 compute-0 podman[74855]: 2025-12-11 09:12:36.17962919 +0000 UTC m=+0.340712441 container died 61a40e013b93d6117df6694cd31795c5dc66db3c7995c8fc6ed23ba9efb3bbc4 (image=quay.io/ceph/ceph:v19, name=dazzling_yonath, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.build-date=20250325)
Dec 11 09:12:36 compute-0 ceph-mgr[74715]: mgr[py] Module rgw has missing NOTIFY_TYPES member
Dec 11 09:12:36 compute-0 ceph-mgr[74715]: mgr[py] Loading python module 'rook'
Dec 11 09:12:36 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-mgr-compute-0-wwpcae[74711]: 2025-12-11T09:12:36.195+0000 7fd4371ba140 -1 mgr[py] Module rgw has missing NOTIFY_TYPES member
Dec 11 09:12:36 compute-0 systemd[1]: var-lib-containers-storage-overlay-eae934fb8ec3f1fd93581070d708d0c1e538ab50d306e953b74e7410b12020ce-merged.mount: Deactivated successfully.
Dec 11 09:12:36 compute-0 ceph-mon[74426]: from='client.? 192.168.122.100:0/3606604342' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
Dec 11 09:12:36 compute-0 podman[74855]: 2025-12-11 09:12:36.225353807 +0000 UTC m=+0.386437048 container remove 61a40e013b93d6117df6694cd31795c5dc66db3c7995c8fc6ed23ba9efb3bbc4 (image=quay.io/ceph/ceph:v19, name=dazzling_yonath, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid)
Dec 11 09:12:36 compute-0 systemd[1]: libpod-conmon-61a40e013b93d6117df6694cd31795c5dc66db3c7995c8fc6ed23ba9efb3bbc4.scope: Deactivated successfully.
Dec 11 09:12:36 compute-0 ceph-mgr[74715]: mgr[py] Module rook has missing NOTIFY_TYPES member
Dec 11 09:12:36 compute-0 ceph-mgr[74715]: mgr[py] Loading python module 'selftest'
Dec 11 09:12:36 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-mgr-compute-0-wwpcae[74711]: 2025-12-11T09:12:36.841+0000 7fd4371ba140 -1 mgr[py] Module rook has missing NOTIFY_TYPES member
Dec 11 09:12:36 compute-0 ceph-mgr[74715]: mgr[py] Module selftest has missing NOTIFY_TYPES member
Dec 11 09:12:36 compute-0 ceph-mgr[74715]: mgr[py] Loading python module 'snap_schedule'
Dec 11 09:12:36 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-mgr-compute-0-wwpcae[74711]: 2025-12-11T09:12:36.925+0000 7fd4371ba140 -1 mgr[py] Module selftest has missing NOTIFY_TYPES member
Dec 11 09:12:37 compute-0 ceph-mgr[74715]: mgr[py] Module snap_schedule has missing NOTIFY_TYPES member
Dec 11 09:12:37 compute-0 ceph-mgr[74715]: mgr[py] Loading python module 'stats'
Dec 11 09:12:37 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-mgr-compute-0-wwpcae[74711]: 2025-12-11T09:12:37.014+0000 7fd4371ba140 -1 mgr[py] Module snap_schedule has missing NOTIFY_TYPES member
Dec 11 09:12:37 compute-0 ceph-mgr[74715]: mgr[py] Loading python module 'status'
Dec 11 09:12:37 compute-0 ceph-mgr[74715]: mgr[py] Module status has missing NOTIFY_TYPES member
Dec 11 09:12:37 compute-0 ceph-mgr[74715]: mgr[py] Loading python module 'telegraf'
Dec 11 09:12:37 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-mgr-compute-0-wwpcae[74711]: 2025-12-11T09:12:37.192+0000 7fd4371ba140 -1 mgr[py] Module status has missing NOTIFY_TYPES member
Dec 11 09:12:37 compute-0 ceph-mgr[74715]: mgr[py] Module telegraf has missing NOTIFY_TYPES member
Dec 11 09:12:37 compute-0 ceph-mgr[74715]: mgr[py] Loading python module 'telemetry'
Dec 11 09:12:37 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-mgr-compute-0-wwpcae[74711]: 2025-12-11T09:12:37.283+0000 7fd4371ba140 -1 mgr[py] Module telegraf has missing NOTIFY_TYPES member
Dec 11 09:12:37 compute-0 ceph-mgr[74715]: mgr[py] Module telemetry has missing NOTIFY_TYPES member
Dec 11 09:12:37 compute-0 ceph-mgr[74715]: mgr[py] Loading python module 'test_orchestrator'
Dec 11 09:12:37 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-mgr-compute-0-wwpcae[74711]: 2025-12-11T09:12:37.485+0000 7fd4371ba140 -1 mgr[py] Module telemetry has missing NOTIFY_TYPES member
Dec 11 09:12:37 compute-0 ceph-mgr[74715]: mgr[py] Module test_orchestrator has missing NOTIFY_TYPES member
Dec 11 09:12:37 compute-0 ceph-mgr[74715]: mgr[py] Loading python module 'volumes'
Dec 11 09:12:37 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-mgr-compute-0-wwpcae[74711]: 2025-12-11T09:12:37.743+0000 7fd4371ba140 -1 mgr[py] Module test_orchestrator has missing NOTIFY_TYPES member
Dec 11 09:12:38 compute-0 ceph-mgr[74715]: mgr[py] Module volumes has missing NOTIFY_TYPES member
Dec 11 09:12:38 compute-0 ceph-mgr[74715]: mgr[py] Loading python module 'zabbix'
Dec 11 09:12:38 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-mgr-compute-0-wwpcae[74711]: 2025-12-11T09:12:38.038+0000 7fd4371ba140 -1 mgr[py] Module volumes has missing NOTIFY_TYPES member
Dec 11 09:12:38 compute-0 ceph-mgr[74715]: mgr[py] Module zabbix has missing NOTIFY_TYPES member
Dec 11 09:12:38 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-mgr-compute-0-wwpcae[74711]: 2025-12-11T09:12:38.117+0000 7fd4371ba140 -1 mgr[py] Module zabbix has missing NOTIFY_TYPES member
Dec 11 09:12:38 compute-0 ceph-mgr[74715]: ms_deliver_dispatch: unhandled message 0x55948fb969c0 mon_map magic: 0 from mon.0 v2:192.168.122.100:3300/0
Dec 11 09:12:38 compute-0 ceph-mon[74426]: log_channel(cluster) log [INF] : Activating manager daemon compute-0.wwpcae
Dec 11 09:12:38 compute-0 ceph-mgr[74715]: mgr handle_mgr_map Activating!
Dec 11 09:12:38 compute-0 ceph-mgr[74715]: mgr handle_mgr_map I am now activating
Dec 11 09:12:38 compute-0 ceph-mon[74426]: log_channel(cluster) log [DBG] : mgrmap e2: compute-0.wwpcae(active, starting, since 0.0131377s)
Dec 11 09:12:38 compute-0 ceph-mon[74426]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mds metadata"} v 0)
Dec 11 09:12:38 compute-0 ceph-mon[74426]: log_channel(audit) log [DBG] : from='mgr.14102 192.168.122.100:0/2418747803' entity='mgr.compute-0.wwpcae' cmd=[{"prefix": "mds metadata"}]: dispatch
Dec 11 09:12:38 compute-0 ceph-mon[74426]: mon.compute-0@0(leader).mds e1 all = 1
Dec 11 09:12:38 compute-0 ceph-mon[74426]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata"} v 0)
Dec 11 09:12:38 compute-0 ceph-mon[74426]: log_channel(audit) log [DBG] : from='mgr.14102 192.168.122.100:0/2418747803' entity='mgr.compute-0.wwpcae' cmd=[{"prefix": "osd metadata"}]: dispatch
Dec 11 09:12:38 compute-0 ceph-mon[74426]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon metadata"} v 0)
Dec 11 09:12:38 compute-0 ceph-mon[74426]: log_channel(audit) log [DBG] : from='mgr.14102 192.168.122.100:0/2418747803' entity='mgr.compute-0.wwpcae' cmd=[{"prefix": "mon metadata"}]: dispatch
Dec 11 09:12:38 compute-0 ceph-mon[74426]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon metadata", "id": "compute-0"} v 0)
Dec 11 09:12:38 compute-0 ceph-mon[74426]: log_channel(audit) log [DBG] : from='mgr.14102 192.168.122.100:0/2418747803' entity='mgr.compute-0.wwpcae' cmd=[{"prefix": "mon metadata", "id": "compute-0"}]: dispatch
Dec 11 09:12:38 compute-0 ceph-mon[74426]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr metadata", "who": "compute-0.wwpcae", "id": "compute-0.wwpcae"} v 0)
Dec 11 09:12:38 compute-0 ceph-mon[74426]: log_channel(audit) log [DBG] : from='mgr.14102 192.168.122.100:0/2418747803' entity='mgr.compute-0.wwpcae' cmd=[{"prefix": "mgr metadata", "who": "compute-0.wwpcae", "id": "compute-0.wwpcae"}]: dispatch
Dec 11 09:12:38 compute-0 ceph-mgr[74715]: [balancer DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec 11 09:12:38 compute-0 ceph-mgr[74715]: mgr load Constructed class from module: balancer
Dec 11 09:12:38 compute-0 ceph-mgr[74715]: [crash DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec 11 09:12:38 compute-0 ceph-mgr[74715]: mgr load Constructed class from module: crash
Dec 11 09:12:38 compute-0 ceph-mon[74426]: log_channel(cluster) log [INF] : Manager daemon compute-0.wwpcae is now available
Dec 11 09:12:38 compute-0 ceph-mgr[74715]: [balancer INFO root] Starting
Dec 11 09:12:38 compute-0 ceph-mgr[74715]: [devicehealth DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec 11 09:12:38 compute-0 ceph-mgr[74715]: mgr load Constructed class from module: devicehealth
Dec 11 09:12:38 compute-0 ceph-mgr[74715]: [devicehealth INFO root] Starting
Dec 11 09:12:38 compute-0 ceph-mgr[74715]: [iostat DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec 11 09:12:38 compute-0 ceph-mgr[74715]: mgr load Constructed class from module: iostat
Dec 11 09:12:38 compute-0 ceph-mgr[74715]: [balancer INFO root] Optimize plan auto_2025-12-11_09:12:38
Dec 11 09:12:38 compute-0 ceph-mgr[74715]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec 11 09:12:38 compute-0 ceph-mgr[74715]: [balancer INFO root] do_upmap
Dec 11 09:12:38 compute-0 ceph-mgr[74715]: [balancer INFO root] No pools available
Dec 11 09:12:38 compute-0 ceph-mgr[74715]: [nfs DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec 11 09:12:38 compute-0 ceph-mgr[74715]: mgr load Constructed class from module: nfs
Dec 11 09:12:38 compute-0 ceph-mgr[74715]: [orchestrator DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec 11 09:12:38 compute-0 ceph-mgr[74715]: mgr load Constructed class from module: orchestrator
Dec 11 09:12:38 compute-0 ceph-mgr[74715]: [pg_autoscaler DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec 11 09:12:38 compute-0 ceph-mgr[74715]: mgr load Constructed class from module: pg_autoscaler
Dec 11 09:12:38 compute-0 ceph-mgr[74715]: [progress DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec 11 09:12:38 compute-0 ceph-mgr[74715]: mgr load Constructed class from module: progress
Dec 11 09:12:38 compute-0 ceph-mgr[74715]: [progress INFO root] Loading...
Dec 11 09:12:38 compute-0 ceph-mgr[74715]: [rbd_support DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec 11 09:12:38 compute-0 ceph-mgr[74715]: [progress INFO root] No stored events to load
Dec 11 09:12:38 compute-0 ceph-mgr[74715]: [progress INFO root] Loaded [] historic events
Dec 11 09:12:38 compute-0 ceph-mgr[74715]: [progress INFO root] Loaded OSDMap, ready.
Dec 11 09:12:38 compute-0 ceph-mgr[74715]: [pg_autoscaler INFO root] _maybe_adjust
Dec 11 09:12:38 compute-0 ceph-mgr[74715]: [rbd_support INFO root] recovery thread starting
Dec 11 09:12:38 compute-0 ceph-mgr[74715]: [rbd_support INFO root] starting setup
Dec 11 09:12:38 compute-0 ceph-mgr[74715]: mgr load Constructed class from module: rbd_support
Dec 11 09:12:38 compute-0 ceph-mgr[74715]: [restful DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec 11 09:12:38 compute-0 ceph-mgr[74715]: mgr load Constructed class from module: restful
Dec 11 09:12:38 compute-0 ceph-mgr[74715]: [status DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec 11 09:12:38 compute-0 ceph-mgr[74715]: mgr load Constructed class from module: status
Dec 11 09:12:38 compute-0 ceph-mgr[74715]: [restful INFO root] server_addr: :: server_port: 8003
Dec 11 09:12:38 compute-0 ceph-mon[74426]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.wwpcae/mirror_snapshot_schedule"} v 0)
Dec 11 09:12:38 compute-0 ceph-mon[74426]: log_channel(audit) log [INF] : from='mgr.14102 192.168.122.100:0/2418747803' entity='mgr.compute-0.wwpcae' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.wwpcae/mirror_snapshot_schedule"}]: dispatch
Dec 11 09:12:38 compute-0 ceph-mgr[74715]: [telemetry DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec 11 09:12:38 compute-0 ceph-mgr[74715]: mgr load Constructed class from module: telemetry
Dec 11 09:12:38 compute-0 ceph-mgr[74715]: [restful WARNING root] server not running: no certificate configured
Dec 11 09:12:38 compute-0 ceph-mgr[74715]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec 11 09:12:38 compute-0 ceph-mon[74426]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/telemetry/report_id}] v 0)
Dec 11 09:12:38 compute-0 ceph-mgr[74715]: [volumes DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec 11 09:12:38 compute-0 ceph-mgr[74715]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: starting
Dec 11 09:12:38 compute-0 ceph-mgr[74715]: [rbd_support INFO root] PerfHandler: starting
Dec 11 09:12:38 compute-0 ceph-mgr[74715]: [rbd_support INFO root] TaskHandler: starting
Dec 11 09:12:38 compute-0 ceph-mon[74426]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.wwpcae/trash_purge_schedule"} v 0)
Dec 11 09:12:38 compute-0 ceph-mon[74426]: log_channel(audit) log [INF] : from='mgr.14102 192.168.122.100:0/2418747803' entity='mgr.compute-0.wwpcae' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.wwpcae/trash_purge_schedule"}]: dispatch
Dec 11 09:12:38 compute-0 ceph-mon[74426]: log_channel(audit) log [INF] : from='mgr.14102 192.168.122.100:0/2418747803' entity='mgr.compute-0.wwpcae' 
Dec 11 09:12:38 compute-0 ceph-mgr[74715]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec 11 09:12:38 compute-0 ceph-mon[74426]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/telemetry/salt}] v 0)
Dec 11 09:12:38 compute-0 ceph-mgr[74715]: [rbd_support INFO root] TrashPurgeScheduleHandler: starting
Dec 11 09:12:38 compute-0 ceph-mgr[74715]: [rbd_support INFO root] setup complete
Dec 11 09:12:38 compute-0 ceph-mon[74426]: log_channel(audit) log [INF] : from='mgr.14102 192.168.122.100:0/2418747803' entity='mgr.compute-0.wwpcae' 
Dec 11 09:12:38 compute-0 ceph-mon[74426]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/telemetry/collection}] v 0)
Dec 11 09:12:38 compute-0 ceph-mgr[74715]: mgr load Constructed class from module: volumes
Dec 11 09:12:38 compute-0 ceph-mon[74426]: Activating manager daemon compute-0.wwpcae
Dec 11 09:12:38 compute-0 ceph-mon[74426]: mgrmap e2: compute-0.wwpcae(active, starting, since 0.0131377s)
Dec 11 09:12:38 compute-0 ceph-mon[74426]: from='mgr.14102 192.168.122.100:0/2418747803' entity='mgr.compute-0.wwpcae' cmd=[{"prefix": "mds metadata"}]: dispatch
Dec 11 09:12:38 compute-0 ceph-mon[74426]: from='mgr.14102 192.168.122.100:0/2418747803' entity='mgr.compute-0.wwpcae' cmd=[{"prefix": "osd metadata"}]: dispatch
Dec 11 09:12:38 compute-0 ceph-mon[74426]: from='mgr.14102 192.168.122.100:0/2418747803' entity='mgr.compute-0.wwpcae' cmd=[{"prefix": "mon metadata"}]: dispatch
Dec 11 09:12:38 compute-0 ceph-mon[74426]: from='mgr.14102 192.168.122.100:0/2418747803' entity='mgr.compute-0.wwpcae' cmd=[{"prefix": "mon metadata", "id": "compute-0"}]: dispatch
Dec 11 09:12:38 compute-0 ceph-mon[74426]: from='mgr.14102 192.168.122.100:0/2418747803' entity='mgr.compute-0.wwpcae' cmd=[{"prefix": "mgr metadata", "who": "compute-0.wwpcae", "id": "compute-0.wwpcae"}]: dispatch
Dec 11 09:12:38 compute-0 ceph-mon[74426]: Manager daemon compute-0.wwpcae is now available
Dec 11 09:12:38 compute-0 ceph-mon[74426]: from='mgr.14102 192.168.122.100:0/2418747803' entity='mgr.compute-0.wwpcae' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.wwpcae/mirror_snapshot_schedule"}]: dispatch
Dec 11 09:12:38 compute-0 ceph-mon[74426]: from='mgr.14102 192.168.122.100:0/2418747803' entity='mgr.compute-0.wwpcae' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.wwpcae/trash_purge_schedule"}]: dispatch
Dec 11 09:12:38 compute-0 ceph-mon[74426]: from='mgr.14102 192.168.122.100:0/2418747803' entity='mgr.compute-0.wwpcae' 
Dec 11 09:12:38 compute-0 ceph-mon[74426]: log_channel(audit) log [INF] : from='mgr.14102 192.168.122.100:0/2418747803' entity='mgr.compute-0.wwpcae' 
Dec 11 09:12:38 compute-0 podman[74989]: 2025-12-11 09:12:38.345000062 +0000 UTC m=+0.079988366 container create fe29c00ed1bb0269d4c3bfb4a9bb79987495f6fe4721edf23a1af903cf1bd0d7 (image=quay.io/ceph/ceph:v19, name=goofy_galois, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 11 09:12:38 compute-0 systemd[1]: Started libpod-conmon-fe29c00ed1bb0269d4c3bfb4a9bb79987495f6fe4721edf23a1af903cf1bd0d7.scope.
Dec 11 09:12:38 compute-0 systemd[1]: Started libcrun container.
Dec 11 09:12:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/24665e56ef4bba9b9e9ac2ccfb7db4793ba46cd65b6abf9f1566044e66aedfec/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 11 09:12:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/24665e56ef4bba9b9e9ac2ccfb7db4793ba46cd65b6abf9f1566044e66aedfec/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Dec 11 09:12:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/24665e56ef4bba9b9e9ac2ccfb7db4793ba46cd65b6abf9f1566044e66aedfec/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 11 09:12:38 compute-0 podman[74989]: 2025-12-11 09:12:38.316914985 +0000 UTC m=+0.051903319 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Dec 11 09:12:38 compute-0 podman[74989]: 2025-12-11 09:12:38.41773285 +0000 UTC m=+0.152721164 container init fe29c00ed1bb0269d4c3bfb4a9bb79987495f6fe4721edf23a1af903cf1bd0d7 (image=quay.io/ceph/ceph:v19, name=goofy_galois, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec 11 09:12:38 compute-0 podman[74989]: 2025-12-11 09:12:38.422532414 +0000 UTC m=+0.157520698 container start fe29c00ed1bb0269d4c3bfb4a9bb79987495f6fe4721edf23a1af903cf1bd0d7 (image=quay.io/ceph/ceph:v19, name=goofy_galois, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid)
Dec 11 09:12:38 compute-0 podman[74989]: 2025-12-11 09:12:38.428734595 +0000 UTC m=+0.163722899 container attach fe29c00ed1bb0269d4c3bfb4a9bb79987495f6fe4721edf23a1af903cf1bd0d7 (image=quay.io/ceph/ceph:v19, name=goofy_galois, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 11 09:12:38 compute-0 ceph-mon[74426]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json-pretty"} v 0)
Dec 11 09:12:38 compute-0 ceph-mon[74426]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3891272666' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
Dec 11 09:12:38 compute-0 goofy_galois[75006]: 
Dec 11 09:12:38 compute-0 goofy_galois[75006]: {
Dec 11 09:12:38 compute-0 goofy_galois[75006]:     "fsid": "31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060",
Dec 11 09:12:38 compute-0 goofy_galois[75006]:     "health": {
Dec 11 09:12:38 compute-0 goofy_galois[75006]:         "status": "HEALTH_OK",
Dec 11 09:12:38 compute-0 goofy_galois[75006]:         "checks": {},
Dec 11 09:12:38 compute-0 goofy_galois[75006]:         "mutes": []
Dec 11 09:12:38 compute-0 goofy_galois[75006]:     },
Dec 11 09:12:38 compute-0 goofy_galois[75006]:     "election_epoch": 5,
Dec 11 09:12:38 compute-0 goofy_galois[75006]:     "quorum": [
Dec 11 09:12:38 compute-0 goofy_galois[75006]:         0
Dec 11 09:12:38 compute-0 goofy_galois[75006]:     ],
Dec 11 09:12:38 compute-0 goofy_galois[75006]:     "quorum_names": [
Dec 11 09:12:38 compute-0 goofy_galois[75006]:         "compute-0"
Dec 11 09:12:38 compute-0 goofy_galois[75006]:     ],
Dec 11 09:12:38 compute-0 goofy_galois[75006]:     "quorum_age": 9,
Dec 11 09:12:38 compute-0 goofy_galois[75006]:     "monmap": {
Dec 11 09:12:38 compute-0 goofy_galois[75006]:         "epoch": 1,
Dec 11 09:12:38 compute-0 goofy_galois[75006]:         "min_mon_release_name": "squid",
Dec 11 09:12:38 compute-0 goofy_galois[75006]:         "num_mons": 1
Dec 11 09:12:38 compute-0 goofy_galois[75006]:     },
Dec 11 09:12:38 compute-0 goofy_galois[75006]:     "osdmap": {
Dec 11 09:12:38 compute-0 goofy_galois[75006]:         "epoch": 1,
Dec 11 09:12:38 compute-0 goofy_galois[75006]:         "num_osds": 0,
Dec 11 09:12:38 compute-0 goofy_galois[75006]:         "num_up_osds": 0,
Dec 11 09:12:38 compute-0 goofy_galois[75006]:         "osd_up_since": 0,
Dec 11 09:12:38 compute-0 goofy_galois[75006]:         "num_in_osds": 0,
Dec 11 09:12:38 compute-0 goofy_galois[75006]:         "osd_in_since": 0,
Dec 11 09:12:38 compute-0 goofy_galois[75006]:         "num_remapped_pgs": 0
Dec 11 09:12:38 compute-0 goofy_galois[75006]:     },
Dec 11 09:12:38 compute-0 goofy_galois[75006]:     "pgmap": {
Dec 11 09:12:38 compute-0 goofy_galois[75006]:         "pgs_by_state": [],
Dec 11 09:12:38 compute-0 goofy_galois[75006]:         "num_pgs": 0,
Dec 11 09:12:38 compute-0 goofy_galois[75006]:         "num_pools": 0,
Dec 11 09:12:38 compute-0 goofy_galois[75006]:         "num_objects": 0,
Dec 11 09:12:38 compute-0 goofy_galois[75006]:         "data_bytes": 0,
Dec 11 09:12:38 compute-0 goofy_galois[75006]:         "bytes_used": 0,
Dec 11 09:12:38 compute-0 goofy_galois[75006]:         "bytes_avail": 0,
Dec 11 09:12:38 compute-0 goofy_galois[75006]:         "bytes_total": 0
Dec 11 09:12:38 compute-0 goofy_galois[75006]:     },
Dec 11 09:12:38 compute-0 goofy_galois[75006]:     "fsmap": {
Dec 11 09:12:38 compute-0 goofy_galois[75006]:         "epoch": 1,
Dec 11 09:12:38 compute-0 goofy_galois[75006]:         "btime": "2025-12-11T09:12:26:191373+0000",
Dec 11 09:12:38 compute-0 goofy_galois[75006]:         "by_rank": [],
Dec 11 09:12:38 compute-0 goofy_galois[75006]:         "up:standby": 0
Dec 11 09:12:38 compute-0 goofy_galois[75006]:     },
Dec 11 09:12:38 compute-0 goofy_galois[75006]:     "mgrmap": {
Dec 11 09:12:38 compute-0 goofy_galois[75006]:         "available": false,
Dec 11 09:12:38 compute-0 goofy_galois[75006]:         "num_standbys": 0,
Dec 11 09:12:38 compute-0 goofy_galois[75006]:         "modules": [
Dec 11 09:12:38 compute-0 goofy_galois[75006]:             "iostat",
Dec 11 09:12:38 compute-0 goofy_galois[75006]:             "nfs",
Dec 11 09:12:38 compute-0 goofy_galois[75006]:             "restful"
Dec 11 09:12:38 compute-0 goofy_galois[75006]:         ],
Dec 11 09:12:38 compute-0 goofy_galois[75006]:         "services": {}
Dec 11 09:12:38 compute-0 goofy_galois[75006]:     },
Dec 11 09:12:38 compute-0 goofy_galois[75006]:     "servicemap": {
Dec 11 09:12:38 compute-0 goofy_galois[75006]:         "epoch": 1,
Dec 11 09:12:38 compute-0 goofy_galois[75006]:         "modified": "2025-12-11T09:12:26.198210+0000",
Dec 11 09:12:38 compute-0 goofy_galois[75006]:         "services": {}
Dec 11 09:12:38 compute-0 goofy_galois[75006]:     },
Dec 11 09:12:38 compute-0 goofy_galois[75006]:     "progress_events": {}
Dec 11 09:12:38 compute-0 goofy_galois[75006]: }
Dec 11 09:12:38 compute-0 systemd[1]: libpod-fe29c00ed1bb0269d4c3bfb4a9bb79987495f6fe4721edf23a1af903cf1bd0d7.scope: Deactivated successfully.
Dec 11 09:12:38 compute-0 podman[74989]: 2025-12-11 09:12:38.636577757 +0000 UTC m=+0.371566041 container died fe29c00ed1bb0269d4c3bfb4a9bb79987495f6fe4721edf23a1af903cf1bd0d7 (image=quay.io/ceph/ceph:v19, name=goofy_galois, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 11 09:12:38 compute-0 systemd[1]: var-lib-containers-storage-overlay-24665e56ef4bba9b9e9ac2ccfb7db4793ba46cd65b6abf9f1566044e66aedfec-merged.mount: Deactivated successfully.
Dec 11 09:12:38 compute-0 podman[74989]: 2025-12-11 09:12:38.673401772 +0000 UTC m=+0.408390056 container remove fe29c00ed1bb0269d4c3bfb4a9bb79987495f6fe4721edf23a1af903cf1bd0d7 (image=quay.io/ceph/ceph:v19, name=goofy_galois, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, OSD_FLAVOR=default)
Dec 11 09:12:38 compute-0 systemd[1]: libpod-conmon-fe29c00ed1bb0269d4c3bfb4a9bb79987495f6fe4721edf23a1af903cf1bd0d7.scope: Deactivated successfully.
Dec 11 09:12:39 compute-0 ceph-mon[74426]: log_channel(cluster) log [DBG] : mgrmap e3: compute-0.wwpcae(active, since 1.0252s)
Dec 11 09:12:39 compute-0 ceph-mon[74426]: from='mgr.14102 192.168.122.100:0/2418747803' entity='mgr.compute-0.wwpcae' 
Dec 11 09:12:39 compute-0 ceph-mon[74426]: from='mgr.14102 192.168.122.100:0/2418747803' entity='mgr.compute-0.wwpcae' 
Dec 11 09:12:39 compute-0 ceph-mon[74426]: from='client.? 192.168.122.100:0/3891272666' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
Dec 11 09:12:39 compute-0 ceph-mon[74426]: mgrmap e3: compute-0.wwpcae(active, since 1.0252s)
Dec 11 09:12:40 compute-0 ceph-mgr[74715]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Dec 11 09:12:40 compute-0 ceph-mon[74426]: log_channel(cluster) log [DBG] : mgrmap e4: compute-0.wwpcae(active, since 2s)
Dec 11 09:12:40 compute-0 podman[75043]: 2025-12-11 09:12:40.745123844 +0000 UTC m=+0.043718640 container create 207fbda41da6f259631d01a2591dfe3a860ba4ec219ca32daf5dc3cc90767cf8 (image=quay.io/ceph/ceph:v19, name=angry_moser, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 11 09:12:40 compute-0 systemd[1]: Started libpod-conmon-207fbda41da6f259631d01a2591dfe3a860ba4ec219ca32daf5dc3cc90767cf8.scope.
Dec 11 09:12:40 compute-0 systemd[1]: Started libcrun container.
Dec 11 09:12:40 compute-0 podman[75043]: 2025-12-11 09:12:40.729769901 +0000 UTC m=+0.028364717 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Dec 11 09:12:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f1fd4995b8551a4d753f6b5620eb6068b7e91077703a988f16da2f5cd9165be8/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 11 09:12:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f1fd4995b8551a4d753f6b5620eb6068b7e91077703a988f16da2f5cd9165be8/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 11 09:12:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f1fd4995b8551a4d753f6b5620eb6068b7e91077703a988f16da2f5cd9165be8/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Dec 11 09:12:40 compute-0 podman[75043]: 2025-12-11 09:12:40.858656692 +0000 UTC m=+0.157251538 container init 207fbda41da6f259631d01a2591dfe3a860ba4ec219ca32daf5dc3cc90767cf8 (image=quay.io/ceph/ceph:v19, name=angry_moser, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325)
Dec 11 09:12:40 compute-0 podman[75043]: 2025-12-11 09:12:40.8656182 +0000 UTC m=+0.164213036 container start 207fbda41da6f259631d01a2591dfe3a860ba4ec219ca32daf5dc3cc90767cf8 (image=quay.io/ceph/ceph:v19, name=angry_moser, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1)
Dec 11 09:12:40 compute-0 podman[75043]: 2025-12-11 09:12:40.869634947 +0000 UTC m=+0.168229773 container attach 207fbda41da6f259631d01a2591dfe3a860ba4ec219ca32daf5dc3cc90767cf8 (image=quay.io/ceph/ceph:v19, name=angry_moser, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 11 09:12:41 compute-0 ceph-mon[74426]: mgrmap e4: compute-0.wwpcae(active, since 2s)
Dec 11 09:12:41 compute-0 ceph-mon[74426]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json-pretty"} v 0)
Dec 11 09:12:41 compute-0 ceph-mon[74426]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4238676892' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
Dec 11 09:12:41 compute-0 angry_moser[75058]: 
Dec 11 09:12:41 compute-0 angry_moser[75058]: {
Dec 11 09:12:41 compute-0 angry_moser[75058]:     "fsid": "31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060",
Dec 11 09:12:41 compute-0 angry_moser[75058]:     "health": {
Dec 11 09:12:41 compute-0 angry_moser[75058]:         "status": "HEALTH_OK",
Dec 11 09:12:41 compute-0 angry_moser[75058]:         "checks": {},
Dec 11 09:12:41 compute-0 angry_moser[75058]:         "mutes": []
Dec 11 09:12:41 compute-0 angry_moser[75058]:     },
Dec 11 09:12:41 compute-0 angry_moser[75058]:     "election_epoch": 5,
Dec 11 09:12:41 compute-0 angry_moser[75058]:     "quorum": [
Dec 11 09:12:41 compute-0 angry_moser[75058]:         0
Dec 11 09:12:41 compute-0 angry_moser[75058]:     ],
Dec 11 09:12:41 compute-0 angry_moser[75058]:     "quorum_names": [
Dec 11 09:12:41 compute-0 angry_moser[75058]:         "compute-0"
Dec 11 09:12:41 compute-0 angry_moser[75058]:     ],
Dec 11 09:12:41 compute-0 angry_moser[75058]:     "quorum_age": 12,
Dec 11 09:12:41 compute-0 angry_moser[75058]:     "monmap": {
Dec 11 09:12:41 compute-0 angry_moser[75058]:         "epoch": 1,
Dec 11 09:12:41 compute-0 angry_moser[75058]:         "min_mon_release_name": "squid",
Dec 11 09:12:41 compute-0 angry_moser[75058]:         "num_mons": 1
Dec 11 09:12:41 compute-0 angry_moser[75058]:     },
Dec 11 09:12:41 compute-0 angry_moser[75058]:     "osdmap": {
Dec 11 09:12:41 compute-0 angry_moser[75058]:         "epoch": 1,
Dec 11 09:12:41 compute-0 angry_moser[75058]:         "num_osds": 0,
Dec 11 09:12:41 compute-0 angry_moser[75058]:         "num_up_osds": 0,
Dec 11 09:12:41 compute-0 angry_moser[75058]:         "osd_up_since": 0,
Dec 11 09:12:41 compute-0 angry_moser[75058]:         "num_in_osds": 0,
Dec 11 09:12:41 compute-0 angry_moser[75058]:         "osd_in_since": 0,
Dec 11 09:12:41 compute-0 angry_moser[75058]:         "num_remapped_pgs": 0
Dec 11 09:12:41 compute-0 angry_moser[75058]:     },
Dec 11 09:12:41 compute-0 angry_moser[75058]:     "pgmap": {
Dec 11 09:12:41 compute-0 angry_moser[75058]:         "pgs_by_state": [],
Dec 11 09:12:41 compute-0 angry_moser[75058]:         "num_pgs": 0,
Dec 11 09:12:41 compute-0 angry_moser[75058]:         "num_pools": 0,
Dec 11 09:12:41 compute-0 angry_moser[75058]:         "num_objects": 0,
Dec 11 09:12:41 compute-0 angry_moser[75058]:         "data_bytes": 0,
Dec 11 09:12:41 compute-0 angry_moser[75058]:         "bytes_used": 0,
Dec 11 09:12:41 compute-0 angry_moser[75058]:         "bytes_avail": 0,
Dec 11 09:12:41 compute-0 angry_moser[75058]:         "bytes_total": 0
Dec 11 09:12:41 compute-0 angry_moser[75058]:     },
Dec 11 09:12:41 compute-0 angry_moser[75058]:     "fsmap": {
Dec 11 09:12:41 compute-0 angry_moser[75058]:         "epoch": 1,
Dec 11 09:12:41 compute-0 angry_moser[75058]:         "btime": "2025-12-11T09:12:26:191373+0000",
Dec 11 09:12:41 compute-0 angry_moser[75058]:         "by_rank": [],
Dec 11 09:12:41 compute-0 angry_moser[75058]:         "up:standby": 0
Dec 11 09:12:41 compute-0 angry_moser[75058]:     },
Dec 11 09:12:41 compute-0 angry_moser[75058]:     "mgrmap": {
Dec 11 09:12:41 compute-0 angry_moser[75058]:         "available": true,
Dec 11 09:12:41 compute-0 angry_moser[75058]:         "num_standbys": 0,
Dec 11 09:12:41 compute-0 angry_moser[75058]:         "modules": [
Dec 11 09:12:41 compute-0 angry_moser[75058]:             "iostat",
Dec 11 09:12:41 compute-0 angry_moser[75058]:             "nfs",
Dec 11 09:12:41 compute-0 angry_moser[75058]:             "restful"
Dec 11 09:12:41 compute-0 angry_moser[75058]:         ],
Dec 11 09:12:41 compute-0 angry_moser[75058]:         "services": {}
Dec 11 09:12:41 compute-0 angry_moser[75058]:     },
Dec 11 09:12:41 compute-0 angry_moser[75058]:     "servicemap": {
Dec 11 09:12:41 compute-0 angry_moser[75058]:         "epoch": 1,
Dec 11 09:12:41 compute-0 angry_moser[75058]:         "modified": "2025-12-11T09:12:26.198210+0000",
Dec 11 09:12:41 compute-0 angry_moser[75058]:         "services": {}
Dec 11 09:12:41 compute-0 angry_moser[75058]:     },
Dec 11 09:12:41 compute-0 angry_moser[75058]:     "progress_events": {}
Dec 11 09:12:41 compute-0 angry_moser[75058]: }
Dec 11 09:12:41 compute-0 systemd[1]: libpod-207fbda41da6f259631d01a2591dfe3a860ba4ec219ca32daf5dc3cc90767cf8.scope: Deactivated successfully.
Dec 11 09:12:41 compute-0 conmon[75058]: conmon 207fbda41da6f259631d <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-207fbda41da6f259631d01a2591dfe3a860ba4ec219ca32daf5dc3cc90767cf8.scope/container/memory.events
Dec 11 09:12:41 compute-0 podman[75043]: 2025-12-11 09:12:41.323152829 +0000 UTC m=+0.621747625 container died 207fbda41da6f259631d01a2591dfe3a860ba4ec219ca32daf5dc3cc90767cf8 (image=quay.io/ceph/ceph:v19, name=angry_moser, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325)
Dec 11 09:12:41 compute-0 systemd[1]: var-lib-containers-storage-overlay-f1fd4995b8551a4d753f6b5620eb6068b7e91077703a988f16da2f5cd9165be8-merged.mount: Deactivated successfully.
Dec 11 09:12:41 compute-0 podman[75043]: 2025-12-11 09:12:41.358605578 +0000 UTC m=+0.657200374 container remove 207fbda41da6f259631d01a2591dfe3a860ba4ec219ca32daf5dc3cc90767cf8 (image=quay.io/ceph/ceph:v19, name=angry_moser, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1)
Dec 11 09:12:41 compute-0 systemd[1]: libpod-conmon-207fbda41da6f259631d01a2591dfe3a860ba4ec219ca32daf5dc3cc90767cf8.scope: Deactivated successfully.
Dec 11 09:12:41 compute-0 podman[75098]: 2025-12-11 09:12:41.420279469 +0000 UTC m=+0.038561944 container create 01bb50cad562b0da52d3965c4c9e8fb84a83e46cabd321b2a240a1cd3ebf31f6 (image=quay.io/ceph/ceph:v19, name=dreamy_shannon, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.schema-version=1.0)
Dec 11 09:12:41 compute-0 systemd[1]: Started libpod-conmon-01bb50cad562b0da52d3965c4c9e8fb84a83e46cabd321b2a240a1cd3ebf31f6.scope.
Dec 11 09:12:41 compute-0 systemd[1]: Started libcrun container.
Dec 11 09:12:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/154ee78c8c48169eeba9b2c50b798c6e7fc7ef20b13ac30db2e8db5454433f75/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 11 09:12:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/154ee78c8c48169eeba9b2c50b798c6e7fc7ef20b13ac30db2e8db5454433f75/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Dec 11 09:12:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/154ee78c8c48169eeba9b2c50b798c6e7fc7ef20b13ac30db2e8db5454433f75/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 11 09:12:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/154ee78c8c48169eeba9b2c50b798c6e7fc7ef20b13ac30db2e8db5454433f75/merged/var/lib/ceph/user.conf supports timestamps until 2038 (0x7fffffff)
Dec 11 09:12:41 compute-0 podman[75098]: 2025-12-11 09:12:41.404547153 +0000 UTC m=+0.022829648 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Dec 11 09:12:41 compute-0 podman[75098]: 2025-12-11 09:12:41.500949328 +0000 UTC m=+0.119231813 container init 01bb50cad562b0da52d3965c4c9e8fb84a83e46cabd321b2a240a1cd3ebf31f6 (image=quay.io/ceph/ceph:v19, name=dreamy_shannon, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 11 09:12:41 compute-0 podman[75098]: 2025-12-11 09:12:41.51127148 +0000 UTC m=+0.129553955 container start 01bb50cad562b0da52d3965c4c9e8fb84a83e46cabd321b2a240a1cd3ebf31f6 (image=quay.io/ceph/ceph:v19, name=dreamy_shannon, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Dec 11 09:12:41 compute-0 podman[75098]: 2025-12-11 09:12:41.514619244 +0000 UTC m=+0.132901729 container attach 01bb50cad562b0da52d3965c4c9e8fb84a83e46cabd321b2a240a1cd3ebf31f6 (image=quay.io/ceph/ceph:v19, name=dreamy_shannon, ceph=True, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Dec 11 09:12:41 compute-0 ceph-mon[74426]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config assimilate-conf"} v 0)
Dec 11 09:12:41 compute-0 ceph-mon[74426]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/394521785' entity='client.admin' cmd=[{"prefix": "config assimilate-conf"}]: dispatch
Dec 11 09:12:41 compute-0 dreamy_shannon[75115]: 
Dec 11 09:12:41 compute-0 dreamy_shannon[75115]: [global]
Dec 11 09:12:41 compute-0 dreamy_shannon[75115]:         fsid = 31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060
Dec 11 09:12:41 compute-0 dreamy_shannon[75115]:         mon_host = [v2:192.168.122.100:3300,v1:192.168.122.100:6789]
Dec 11 09:12:41 compute-0 systemd[1]: libpod-01bb50cad562b0da52d3965c4c9e8fb84a83e46cabd321b2a240a1cd3ebf31f6.scope: Deactivated successfully.
Dec 11 09:12:41 compute-0 conmon[75115]: conmon 01bb50cad562b0da52d3 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-01bb50cad562b0da52d3965c4c9e8fb84a83e46cabd321b2a240a1cd3ebf31f6.scope/container/memory.events
Dec 11 09:12:41 compute-0 podman[75098]: 2025-12-11 09:12:41.875295403 +0000 UTC m=+0.493577878 container died 01bb50cad562b0da52d3965c4c9e8fb84a83e46cabd321b2a240a1cd3ebf31f6 (image=quay.io/ceph/ceph:v19, name=dreamy_shannon, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.build-date=20250325)
Dec 11 09:12:41 compute-0 systemd[1]: var-lib-containers-storage-overlay-154ee78c8c48169eeba9b2c50b798c6e7fc7ef20b13ac30db2e8db5454433f75-merged.mount: Deactivated successfully.
Dec 11 09:12:42 compute-0 podman[75098]: 2025-12-11 09:12:41.999702723 +0000 UTC m=+0.617985198 container remove 01bb50cad562b0da52d3965c4c9e8fb84a83e46cabd321b2a240a1cd3ebf31f6 (image=quay.io/ceph/ceph:v19, name=dreamy_shannon, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 11 09:12:42 compute-0 systemd[1]: libpod-conmon-01bb50cad562b0da52d3965c4c9e8fb84a83e46cabd321b2a240a1cd3ebf31f6.scope: Deactivated successfully.
Dec 11 09:12:42 compute-0 podman[75155]: 2025-12-11 09:12:42.08533953 +0000 UTC m=+0.061864719 container create b36a6906e4e19b88d108e87812ac555b14707e6144a2a70343eb8681f45389f4 (image=quay.io/ceph/ceph:v19, name=tender_jackson, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Dec 11 09:12:42 compute-0 systemd[1]: Started libpod-conmon-b36a6906e4e19b88d108e87812ac555b14707e6144a2a70343eb8681f45389f4.scope.
Dec 11 09:12:42 compute-0 ceph-mgr[74715]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Dec 11 09:12:42 compute-0 systemd[1]: Started libcrun container.
Dec 11 09:12:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e6db256f252d423a187a028dff56cd187a012a2bd5779ff596750199c473fd2a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 11 09:12:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e6db256f252d423a187a028dff56cd187a012a2bd5779ff596750199c473fd2a/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Dec 11 09:12:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e6db256f252d423a187a028dff56cd187a012a2bd5779ff596750199c473fd2a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 11 09:12:42 compute-0 podman[75155]: 2025-12-11 09:12:42.055701651 +0000 UTC m=+0.032226870 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Dec 11 09:12:42 compute-0 podman[75155]: 2025-12-11 09:12:42.193437784 +0000 UTC m=+0.169962993 container init b36a6906e4e19b88d108e87812ac555b14707e6144a2a70343eb8681f45389f4 (image=quay.io/ceph/ceph:v19, name=tender_jackson, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 11 09:12:42 compute-0 podman[75155]: 2025-12-11 09:12:42.199388066 +0000 UTC m=+0.175913255 container start b36a6906e4e19b88d108e87812ac555b14707e6144a2a70343eb8681f45389f4 (image=quay.io/ceph/ceph:v19, name=tender_jackson, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 11 09:12:42 compute-0 podman[75155]: 2025-12-11 09:12:42.303425911 +0000 UTC m=+0.279951130 container attach b36a6906e4e19b88d108e87812ac555b14707e6144a2a70343eb8681f45389f4 (image=quay.io/ceph/ceph:v19, name=tender_jackson, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 11 09:12:42 compute-0 ceph-mon[74426]: from='client.? 192.168.122.100:0/4238676892' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
Dec 11 09:12:42 compute-0 ceph-mon[74426]: from='client.? 192.168.122.100:0/394521785' entity='client.admin' cmd=[{"prefix": "config assimilate-conf"}]: dispatch
Dec 11 09:12:42 compute-0 ceph-mon[74426]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr module enable", "module": "cephadm"} v 0)
Dec 11 09:12:42 compute-0 ceph-mon[74426]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2051671827' entity='client.admin' cmd=[{"prefix": "mgr module enable", "module": "cephadm"}]: dispatch
Dec 11 09:12:43 compute-0 ceph-mon[74426]: from='client.? 192.168.122.100:0/2051671827' entity='client.admin' cmd=[{"prefix": "mgr module enable", "module": "cephadm"}]: dispatch
Dec 11 09:12:43 compute-0 ceph-mon[74426]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2051671827' entity='client.admin' cmd='[{"prefix": "mgr module enable", "module": "cephadm"}]': finished
Dec 11 09:12:43 compute-0 ceph-mgr[74715]: mgr handle_mgr_map respawning because set of enabled modules changed!
Dec 11 09:12:43 compute-0 ceph-mgr[74715]: mgr respawn  e: '/usr/bin/ceph-mgr'
Dec 11 09:12:43 compute-0 ceph-mgr[74715]: mgr respawn  0: '/usr/bin/ceph-mgr'
Dec 11 09:12:43 compute-0 ceph-mgr[74715]: mgr respawn  1: '-n'
Dec 11 09:12:43 compute-0 ceph-mgr[74715]: mgr respawn  2: 'mgr.compute-0.wwpcae'
Dec 11 09:12:43 compute-0 ceph-mgr[74715]: mgr respawn  3: '-f'
Dec 11 09:12:43 compute-0 ceph-mgr[74715]: mgr respawn  4: '--setuser'
Dec 11 09:12:43 compute-0 ceph-mgr[74715]: mgr respawn  5: 'ceph'
Dec 11 09:12:43 compute-0 ceph-mgr[74715]: mgr respawn  6: '--setgroup'
Dec 11 09:12:43 compute-0 ceph-mgr[74715]: mgr respawn  7: 'ceph'
Dec 11 09:12:43 compute-0 ceph-mgr[74715]: mgr respawn  8: '--default-log-to-file=false'
Dec 11 09:12:43 compute-0 ceph-mgr[74715]: mgr respawn  9: '--default-log-to-journald=true'
Dec 11 09:12:43 compute-0 ceph-mgr[74715]: mgr respawn  10: '--default-log-to-stderr=false'
Dec 11 09:12:43 compute-0 ceph-mgr[74715]: mgr respawn respawning with exe /usr/bin/ceph-mgr
Dec 11 09:12:43 compute-0 ceph-mgr[74715]: mgr respawn  exe_path /proc/self/exe
Dec 11 09:12:43 compute-0 ceph-mon[74426]: log_channel(cluster) log [DBG] : mgrmap e5: compute-0.wwpcae(active, since 5s)
Dec 11 09:12:43 compute-0 systemd[1]: libpod-b36a6906e4e19b88d108e87812ac555b14707e6144a2a70343eb8681f45389f4.scope: Deactivated successfully.
Dec 11 09:12:43 compute-0 podman[75155]: 2025-12-11 09:12:43.459834765 +0000 UTC m=+1.436359974 container died b36a6906e4e19b88d108e87812ac555b14707e6144a2a70343eb8681f45389f4 (image=quay.io/ceph/ceph:v19, name=tender_jackson, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Dec 11 09:12:43 compute-0 systemd[1]: var-lib-containers-storage-overlay-e6db256f252d423a187a028dff56cd187a012a2bd5779ff596750199c473fd2a-merged.mount: Deactivated successfully.
Dec 11 09:12:43 compute-0 podman[75155]: 2025-12-11 09:12:43.497441276 +0000 UTC m=+1.473966465 container remove b36a6906e4e19b88d108e87812ac555b14707e6144a2a70343eb8681f45389f4 (image=quay.io/ceph/ceph:v19, name=tender_jackson, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Dec 11 09:12:43 compute-0 systemd[1]: libpod-conmon-b36a6906e4e19b88d108e87812ac555b14707e6144a2a70343eb8681f45389f4.scope: Deactivated successfully.
Dec 11 09:12:43 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-mgr-compute-0-wwpcae[74711]: ignoring --setuser ceph since I am not root
Dec 11 09:12:43 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-mgr-compute-0-wwpcae[74711]: ignoring --setgroup ceph since I am not root
Dec 11 09:12:43 compute-0 ceph-mgr[74715]: ceph version 19.2.3 (c92aebb279828e9c3c1f5d24613efca272649e62) squid (stable), process ceph-mgr, pid 2
Dec 11 09:12:43 compute-0 ceph-mgr[74715]: pidfile_write: ignore empty --pid-file
Dec 11 09:12:43 compute-0 podman[75208]: 2025-12-11 09:12:43.557280375 +0000 UTC m=+0.039666532 container create 82919505f05167dfa321a2acebbd4b396ec8ef60cd9db7f0711573331dab04b8 (image=quay.io/ceph/ceph:v19, name=elegant_kare, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Dec 11 09:12:43 compute-0 ceph-mgr[74715]: mgr[py] Loading python module 'alerts'
Dec 11 09:12:43 compute-0 systemd[1]: Started libpod-conmon-82919505f05167dfa321a2acebbd4b396ec8ef60cd9db7f0711573331dab04b8.scope.
Dec 11 09:12:43 compute-0 systemd[1]: Started libcrun container.
Dec 11 09:12:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8340bab600f9a1c2f8432868456ba00bf001507b54569b04b5dd3f15f9f8e40e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 11 09:12:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8340bab600f9a1c2f8432868456ba00bf001507b54569b04b5dd3f15f9f8e40e/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Dec 11 09:12:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8340bab600f9a1c2f8432868456ba00bf001507b54569b04b5dd3f15f9f8e40e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 11 09:12:43 compute-0 podman[75208]: 2025-12-11 09:12:43.63050208 +0000 UTC m=+0.112888247 container init 82919505f05167dfa321a2acebbd4b396ec8ef60cd9db7f0711573331dab04b8 (image=quay.io/ceph/ceph:v19, name=elegant_kare, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 11 09:12:43 compute-0 podman[75208]: 2025-12-11 09:12:43.635757469 +0000 UTC m=+0.118143636 container start 82919505f05167dfa321a2acebbd4b396ec8ef60cd9db7f0711573331dab04b8 (image=quay.io/ceph/ceph:v19, name=elegant_kare, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Dec 11 09:12:43 compute-0 podman[75208]: 2025-12-11 09:12:43.64016764 +0000 UTC m=+0.122553807 container attach 82919505f05167dfa321a2acebbd4b396ec8ef60cd9db7f0711573331dab04b8 (image=quay.io/ceph/ceph:v19, name=elegant_kare, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Dec 11 09:12:43 compute-0 podman[75208]: 2025-12-11 09:12:43.541076363 +0000 UTC m=+0.023462550 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Dec 11 09:12:43 compute-0 ceph-mgr[74715]: mgr[py] Module alerts has missing NOTIFY_TYPES member
Dec 11 09:12:43 compute-0 ceph-mgr[74715]: mgr[py] Loading python module 'balancer'
Dec 11 09:12:43 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-mgr-compute-0-wwpcae[74711]: 2025-12-11T09:12:43.686+0000 7f8d98560140 -1 mgr[py] Module alerts has missing NOTIFY_TYPES member
Dec 11 09:12:43 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-mgr-compute-0-wwpcae[74711]: 2025-12-11T09:12:43.793+0000 7f8d98560140 -1 mgr[py] Module balancer has missing NOTIFY_TYPES member
Dec 11 09:12:43 compute-0 ceph-mgr[74715]: mgr[py] Module balancer has missing NOTIFY_TYPES member
Dec 11 09:12:43 compute-0 ceph-mgr[74715]: mgr[py] Loading python module 'cephadm'
Dec 11 09:12:44 compute-0 ceph-mon[74426]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr stat"} v 0)
Dec 11 09:12:44 compute-0 ceph-mon[74426]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3105282320' entity='client.admin' cmd=[{"prefix": "mgr stat"}]: dispatch
Dec 11 09:12:44 compute-0 elegant_kare[75244]: {
Dec 11 09:12:44 compute-0 elegant_kare[75244]:     "epoch": 5,
Dec 11 09:12:44 compute-0 elegant_kare[75244]:     "available": true,
Dec 11 09:12:44 compute-0 elegant_kare[75244]:     "active_name": "compute-0.wwpcae",
Dec 11 09:12:44 compute-0 elegant_kare[75244]:     "num_standby": 0
Dec 11 09:12:44 compute-0 elegant_kare[75244]: }
Dec 11 09:12:44 compute-0 systemd[1]: libpod-82919505f05167dfa321a2acebbd4b396ec8ef60cd9db7f0711573331dab04b8.scope: Deactivated successfully.
Dec 11 09:12:44 compute-0 podman[75208]: 2025-12-11 09:12:44.087618266 +0000 UTC m=+0.570004433 container died 82919505f05167dfa321a2acebbd4b396ec8ef60cd9db7f0711573331dab04b8 (image=quay.io/ceph/ceph:v19, name=elegant_kare, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 11 09:12:44 compute-0 systemd[1]: var-lib-containers-storage-overlay-8340bab600f9a1c2f8432868456ba00bf001507b54569b04b5dd3f15f9f8e40e-merged.mount: Deactivated successfully.
Dec 11 09:12:44 compute-0 podman[75208]: 2025-12-11 09:12:44.218112962 +0000 UTC m=+0.700499129 container remove 82919505f05167dfa321a2acebbd4b396ec8ef60cd9db7f0711573331dab04b8 (image=quay.io/ceph/ceph:v19, name=elegant_kare, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS)
Dec 11 09:12:44 compute-0 systemd[1]: libpod-conmon-82919505f05167dfa321a2acebbd4b396ec8ef60cd9db7f0711573331dab04b8.scope: Deactivated successfully.
Dec 11 09:12:44 compute-0 podman[75288]: 2025-12-11 09:12:44.286287185 +0000 UTC m=+0.046261676 container create f06762565aeeddb56f5f2b99301cbebc4965e91fc49e784f13da35668564546c (image=quay.io/ceph/ceph:v19, name=pedantic_solomon, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 11 09:12:44 compute-0 podman[75288]: 2025-12-11 09:12:44.265999134 +0000 UTC m=+0.025973655 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Dec 11 09:12:44 compute-0 systemd[1]: Started libpod-conmon-f06762565aeeddb56f5f2b99301cbebc4965e91fc49e784f13da35668564546c.scope.
Dec 11 09:12:44 compute-0 systemd[1]: Started libcrun container.
Dec 11 09:12:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6046f987d6e61e28d33df24f3d32a3e5e426f74439a27e702c613de793bb74f9/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 11 09:12:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6046f987d6e61e28d33df24f3d32a3e5e426f74439a27e702c613de793bb74f9/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Dec 11 09:12:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6046f987d6e61e28d33df24f3d32a3e5e426f74439a27e702c613de793bb74f9/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 11 09:12:44 compute-0 ceph-mgr[74715]: mgr[py] Loading python module 'crash'
Dec 11 09:12:44 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-mgr-compute-0-wwpcae[74711]: 2025-12-11T09:12:44.750+0000 7f8d98560140 -1 mgr[py] Module crash has missing NOTIFY_TYPES member
Dec 11 09:12:44 compute-0 ceph-mgr[74715]: mgr[py] Module crash has missing NOTIFY_TYPES member
Dec 11 09:12:44 compute-0 ceph-mgr[74715]: mgr[py] Loading python module 'dashboard'
Dec 11 09:12:45 compute-0 ceph-mgr[74715]: mgr[py] Loading python module 'devicehealth'
Dec 11 09:12:45 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-mgr-compute-0-wwpcae[74711]: 2025-12-11T09:12:45.415+0000 7f8d98560140 -1 mgr[py] Module devicehealth has missing NOTIFY_TYPES member
Dec 11 09:12:45 compute-0 ceph-mgr[74715]: mgr[py] Module devicehealth has missing NOTIFY_TYPES member
Dec 11 09:12:45 compute-0 ceph-mgr[74715]: mgr[py] Loading python module 'diskprediction_local'
Dec 11 09:12:45 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-mgr-compute-0-wwpcae[74711]: /lib64/python3.9/site-packages/scipy/__init__.py:73: UserWarning: NumPy was imported from a Python sub-interpreter but NumPy does not properly support sub-interpreters. This will likely work for most users but might cause hard to track down issues or subtle bugs. A common user of the rare sub-interpreter feature is wsgi which also allows single-interpreter mode.
Dec 11 09:12:45 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-mgr-compute-0-wwpcae[74711]: Improvements in the case of bugs are welcome, but is not on the NumPy roadmap, and full support may require significant effort to achieve.
Dec 11 09:12:45 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-mgr-compute-0-wwpcae[74711]:   from numpy import show_config as show_numpy_config
Dec 11 09:12:45 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-mgr-compute-0-wwpcae[74711]: 2025-12-11T09:12:45.621+0000 7f8d98560140 -1 mgr[py] Module diskprediction_local has missing NOTIFY_TYPES member
Dec 11 09:12:45 compute-0 ceph-mgr[74715]: mgr[py] Module diskprediction_local has missing NOTIFY_TYPES member
Dec 11 09:12:45 compute-0 ceph-mgr[74715]: mgr[py] Loading python module 'influx'
Dec 11 09:12:45 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-mgr-compute-0-wwpcae[74711]: 2025-12-11T09:12:45.692+0000 7f8d98560140 -1 mgr[py] Module influx has missing NOTIFY_TYPES member
Dec 11 09:12:45 compute-0 ceph-mgr[74715]: mgr[py] Module influx has missing NOTIFY_TYPES member
Dec 11 09:12:45 compute-0 ceph-mgr[74715]: mgr[py] Loading python module 'insights'
Dec 11 09:12:45 compute-0 ceph-mgr[74715]: mgr[py] Loading python module 'iostat'
Dec 11 09:12:45 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-mgr-compute-0-wwpcae[74711]: 2025-12-11T09:12:45.859+0000 7f8d98560140 -1 mgr[py] Module iostat has missing NOTIFY_TYPES member
Dec 11 09:12:45 compute-0 ceph-mgr[74715]: mgr[py] Module iostat has missing NOTIFY_TYPES member
Dec 11 09:12:45 compute-0 ceph-mgr[74715]: mgr[py] Loading python module 'k8sevents'
Dec 11 09:12:45 compute-0 ceph-mon[74426]: from='client.? 192.168.122.100:0/2051671827' entity='client.admin' cmd='[{"prefix": "mgr module enable", "module": "cephadm"}]': finished
Dec 11 09:12:45 compute-0 ceph-mon[74426]: mgrmap e5: compute-0.wwpcae(active, since 5s)
Dec 11 09:12:45 compute-0 ceph-mon[74426]: from='client.? 192.168.122.100:0/3105282320' entity='client.admin' cmd=[{"prefix": "mgr stat"}]: dispatch
Dec 11 09:12:45 compute-0 podman[75288]: 2025-12-11 09:12:45.970204124 +0000 UTC m=+1.730178635 container init f06762565aeeddb56f5f2b99301cbebc4965e91fc49e784f13da35668564546c (image=quay.io/ceph/ceph:v19, name=pedantic_solomon, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, OSD_FLAVOR=default)
Dec 11 09:12:45 compute-0 podman[75288]: 2025-12-11 09:12:45.977992019 +0000 UTC m=+1.737966510 container start f06762565aeeddb56f5f2b99301cbebc4965e91fc49e784f13da35668564546c (image=quay.io/ceph/ceph:v19, name=pedantic_solomon, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 11 09:12:46 compute-0 ceph-mgr[74715]: mgr[py] Loading python module 'localpool'
Dec 11 09:12:46 compute-0 ceph-mgr[74715]: mgr[py] Loading python module 'mds_autoscaler'
Dec 11 09:12:46 compute-0 ceph-mgr[74715]: mgr[py] Loading python module 'mirroring'
Dec 11 09:12:46 compute-0 ceph-mgr[74715]: mgr[py] Loading python module 'nfs'
Dec 11 09:12:47 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-mgr-compute-0-wwpcae[74711]: 2025-12-11T09:12:47.053+0000 7f8d98560140 -1 mgr[py] Module nfs has missing NOTIFY_TYPES member
Dec 11 09:12:47 compute-0 ceph-mgr[74715]: mgr[py] Module nfs has missing NOTIFY_TYPES member
Dec 11 09:12:47 compute-0 ceph-mgr[74715]: mgr[py] Loading python module 'orchestrator'
Dec 11 09:12:47 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-mgr-compute-0-wwpcae[74711]: 2025-12-11T09:12:47.313+0000 7f8d98560140 -1 mgr[py] Module orchestrator has missing NOTIFY_TYPES member
Dec 11 09:12:47 compute-0 ceph-mgr[74715]: mgr[py] Module orchestrator has missing NOTIFY_TYPES member
Dec 11 09:12:47 compute-0 ceph-mgr[74715]: mgr[py] Loading python module 'osd_perf_query'
Dec 11 09:12:47 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-mgr-compute-0-wwpcae[74711]: 2025-12-11T09:12:47.409+0000 7f8d98560140 -1 mgr[py] Module osd_perf_query has missing NOTIFY_TYPES member
Dec 11 09:12:47 compute-0 ceph-mgr[74715]: mgr[py] Module osd_perf_query has missing NOTIFY_TYPES member
Dec 11 09:12:47 compute-0 ceph-mgr[74715]: mgr[py] Loading python module 'osd_support'
Dec 11 09:12:47 compute-0 podman[75288]: 2025-12-11 09:12:47.414816167 +0000 UTC m=+3.174790688 container attach f06762565aeeddb56f5f2b99301cbebc4965e91fc49e784f13da35668564546c (image=quay.io/ceph/ceph:v19, name=pedantic_solomon, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 11 09:12:47 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-mgr-compute-0-wwpcae[74711]: 2025-12-11T09:12:47.483+0000 7f8d98560140 -1 mgr[py] Module osd_support has missing NOTIFY_TYPES member
Dec 11 09:12:47 compute-0 ceph-mgr[74715]: mgr[py] Module osd_support has missing NOTIFY_TYPES member
Dec 11 09:12:47 compute-0 ceph-mgr[74715]: mgr[py] Loading python module 'pg_autoscaler'
Dec 11 09:12:47 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-mgr-compute-0-wwpcae[74711]: 2025-12-11T09:12:47.576+0000 7f8d98560140 -1 mgr[py] Module pg_autoscaler has missing NOTIFY_TYPES member
Dec 11 09:12:47 compute-0 ceph-mgr[74715]: mgr[py] Module pg_autoscaler has missing NOTIFY_TYPES member
Dec 11 09:12:47 compute-0 ceph-mgr[74715]: mgr[py] Loading python module 'progress'
Dec 11 09:12:47 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-mgr-compute-0-wwpcae[74711]: 2025-12-11T09:12:47.668+0000 7f8d98560140 -1 mgr[py] Module progress has missing NOTIFY_TYPES member
Dec 11 09:12:47 compute-0 ceph-mgr[74715]: mgr[py] Module progress has missing NOTIFY_TYPES member
Dec 11 09:12:47 compute-0 ceph-mgr[74715]: mgr[py] Loading python module 'prometheus'
Dec 11 09:12:48 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-mgr-compute-0-wwpcae[74711]: 2025-12-11T09:12:48.083+0000 7f8d98560140 -1 mgr[py] Module prometheus has missing NOTIFY_TYPES member
Dec 11 09:12:48 compute-0 ceph-mgr[74715]: mgr[py] Module prometheus has missing NOTIFY_TYPES member
Dec 11 09:12:48 compute-0 ceph-mgr[74715]: mgr[py] Loading python module 'rbd_support'
Dec 11 09:12:48 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-mgr-compute-0-wwpcae[74711]: 2025-12-11T09:12:48.212+0000 7f8d98560140 -1 mgr[py] Module rbd_support has missing NOTIFY_TYPES member
Dec 11 09:12:48 compute-0 ceph-mgr[74715]: mgr[py] Module rbd_support has missing NOTIFY_TYPES member
Dec 11 09:12:48 compute-0 ceph-mgr[74715]: mgr[py] Loading python module 'restful'
Dec 11 09:12:48 compute-0 ceph-mgr[74715]: mgr[py] Loading python module 'rgw'
Dec 11 09:12:48 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-mgr-compute-0-wwpcae[74711]: 2025-12-11T09:12:48.781+0000 7f8d98560140 -1 mgr[py] Module rgw has missing NOTIFY_TYPES member
Dec 11 09:12:48 compute-0 ceph-mgr[74715]: mgr[py] Module rgw has missing NOTIFY_TYPES member
Dec 11 09:12:48 compute-0 ceph-mgr[74715]: mgr[py] Loading python module 'rook'
Dec 11 09:12:49 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-mgr-compute-0-wwpcae[74711]: 2025-12-11T09:12:49.451+0000 7f8d98560140 -1 mgr[py] Module rook has missing NOTIFY_TYPES member
Dec 11 09:12:49 compute-0 ceph-mgr[74715]: mgr[py] Module rook has missing NOTIFY_TYPES member
Dec 11 09:12:49 compute-0 ceph-mgr[74715]: mgr[py] Loading python module 'selftest'
Dec 11 09:12:49 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-mgr-compute-0-wwpcae[74711]: 2025-12-11T09:12:49.532+0000 7f8d98560140 -1 mgr[py] Module selftest has missing NOTIFY_TYPES member
Dec 11 09:12:49 compute-0 ceph-mgr[74715]: mgr[py] Module selftest has missing NOTIFY_TYPES member
Dec 11 09:12:49 compute-0 ceph-mgr[74715]: mgr[py] Loading python module 'snap_schedule'
Dec 11 09:12:49 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-mgr-compute-0-wwpcae[74711]: 2025-12-11T09:12:49.619+0000 7f8d98560140 -1 mgr[py] Module snap_schedule has missing NOTIFY_TYPES member
Dec 11 09:12:49 compute-0 ceph-mgr[74715]: mgr[py] Module snap_schedule has missing NOTIFY_TYPES member
Dec 11 09:12:49 compute-0 ceph-mgr[74715]: mgr[py] Loading python module 'stats'
Dec 11 09:12:49 compute-0 ceph-mgr[74715]: mgr[py] Loading python module 'status'
Dec 11 09:12:49 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-mgr-compute-0-wwpcae[74711]: 2025-12-11T09:12:49.791+0000 7f8d98560140 -1 mgr[py] Module status has missing NOTIFY_TYPES member
Dec 11 09:12:49 compute-0 ceph-mgr[74715]: mgr[py] Module status has missing NOTIFY_TYPES member
Dec 11 09:12:49 compute-0 ceph-mgr[74715]: mgr[py] Loading python module 'telegraf'
Dec 11 09:12:49 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-mgr-compute-0-wwpcae[74711]: 2025-12-11T09:12:49.872+0000 7f8d98560140 -1 mgr[py] Module telegraf has missing NOTIFY_TYPES member
Dec 11 09:12:49 compute-0 ceph-mgr[74715]: mgr[py] Module telegraf has missing NOTIFY_TYPES member
Dec 11 09:12:49 compute-0 ceph-mgr[74715]: mgr[py] Loading python module 'telemetry'
Dec 11 09:12:50 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-mgr-compute-0-wwpcae[74711]: 2025-12-11T09:12:50.057+0000 7f8d98560140 -1 mgr[py] Module telemetry has missing NOTIFY_TYPES member
Dec 11 09:12:50 compute-0 ceph-mgr[74715]: mgr[py] Module telemetry has missing NOTIFY_TYPES member
Dec 11 09:12:50 compute-0 ceph-mgr[74715]: mgr[py] Loading python module 'test_orchestrator'
Dec 11 09:12:50 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-mgr-compute-0-wwpcae[74711]: 2025-12-11T09:12:50.321+0000 7f8d98560140 -1 mgr[py] Module test_orchestrator has missing NOTIFY_TYPES member
Dec 11 09:12:50 compute-0 ceph-mgr[74715]: mgr[py] Module test_orchestrator has missing NOTIFY_TYPES member
Dec 11 09:12:50 compute-0 ceph-mgr[74715]: mgr[py] Loading python module 'volumes'
Dec 11 09:12:50 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-mgr-compute-0-wwpcae[74711]: 2025-12-11T09:12:50.639+0000 7f8d98560140 -1 mgr[py] Module volumes has missing NOTIFY_TYPES member
Dec 11 09:12:50 compute-0 ceph-mgr[74715]: mgr[py] Module volumes has missing NOTIFY_TYPES member
Dec 11 09:12:50 compute-0 ceph-mgr[74715]: mgr[py] Loading python module 'zabbix'
Dec 11 09:12:50 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-mgr-compute-0-wwpcae[74711]: 2025-12-11T09:12:50.716+0000 7f8d98560140 -1 mgr[py] Module zabbix has missing NOTIFY_TYPES member
Dec 11 09:12:50 compute-0 ceph-mgr[74715]: mgr[py] Module zabbix has missing NOTIFY_TYPES member
Dec 11 09:12:50 compute-0 ceph-mon[74426]: log_channel(cluster) log [INF] : Active manager daemon compute-0.wwpcae restarted
Dec 11 09:12:50 compute-0 ceph-mon[74426]: mon.compute-0@0(leader).osd e1 do_prune osdmap full prune enabled
Dec 11 09:12:50 compute-0 ceph-mon[74426]: mon.compute-0@0(leader).osd e1 encode_pending skipping prime_pg_temp; mapping job did not start
Dec 11 09:12:50 compute-0 ceph-mgr[74715]: ms_deliver_dispatch: unhandled message 0x563c57176d00 mon_map magic: 0 from mon.0 v2:192.168.122.100:3300/0
Dec 11 09:12:50 compute-0 ceph-mon[74426]: log_channel(cluster) log [INF] : Activating manager daemon compute-0.wwpcae
Dec 11 09:12:51 compute-0 ceph-mon[74426]: mon.compute-0@0(leader).osd e1 _set_cache_ratios kv ratio 0.25 inc ratio 0.375 full ratio 0.375
Dec 11 09:12:51 compute-0 ceph-mon[74426]: mon.compute-0@0(leader).osd e1 register_cache_with_pcm pcm target: 2147483648 pcm max: 1020054732 pcm min: 134217728 inc_osd_cache size: 1
Dec 11 09:12:51 compute-0 ceph-mon[74426]: mon.compute-0@0(leader).osd e2 e2: 0 total, 0 up, 0 in
Dec 11 09:12:51 compute-0 ceph-mgr[74715]: mgr handle_mgr_map Activating!
Dec 11 09:12:51 compute-0 ceph-mgr[74715]: mgr handle_mgr_map I am now activating
Dec 11 09:12:51 compute-0 ceph-mon[74426]: log_channel(cluster) log [DBG] : osdmap e2: 0 total, 0 up, 0 in
Dec 11 09:12:51 compute-0 ceph-mon[74426]: log_channel(cluster) log [DBG] : mgrmap e6: compute-0.wwpcae(active, starting, since 0.311812s)
Dec 11 09:12:51 compute-0 ceph-mon[74426]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon metadata", "id": "compute-0"} v 0)
Dec 11 09:12:51 compute-0 ceph-mon[74426]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/1266408288' entity='mgr.compute-0.wwpcae' cmd=[{"prefix": "mon metadata", "id": "compute-0"}]: dispatch
Dec 11 09:12:51 compute-0 ceph-mon[74426]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr metadata", "who": "compute-0.wwpcae", "id": "compute-0.wwpcae"} v 0)
Dec 11 09:12:51 compute-0 ceph-mon[74426]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/1266408288' entity='mgr.compute-0.wwpcae' cmd=[{"prefix": "mgr metadata", "who": "compute-0.wwpcae", "id": "compute-0.wwpcae"}]: dispatch
Dec 11 09:12:51 compute-0 ceph-mon[74426]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mds metadata"} v 0)
Dec 11 09:12:51 compute-0 ceph-mon[74426]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/1266408288' entity='mgr.compute-0.wwpcae' cmd=[{"prefix": "mds metadata"}]: dispatch
Dec 11 09:12:51 compute-0 ceph-mon[74426]: mon.compute-0@0(leader).mds e1 all = 1
Dec 11 09:12:51 compute-0 ceph-mon[74426]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata"} v 0)
Dec 11 09:12:51 compute-0 ceph-mon[74426]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/1266408288' entity='mgr.compute-0.wwpcae' cmd=[{"prefix": "osd metadata"}]: dispatch
Dec 11 09:12:51 compute-0 ceph-mon[74426]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon metadata"} v 0)
Dec 11 09:12:51 compute-0 ceph-mon[74426]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/1266408288' entity='mgr.compute-0.wwpcae' cmd=[{"prefix": "mon metadata"}]: dispatch
Dec 11 09:12:51 compute-0 ceph-mon[74426]: log_channel(cluster) log [INF] : Manager daemon compute-0.wwpcae is now available
Dec 11 09:12:51 compute-0 ceph-mgr[74715]: [balancer DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec 11 09:12:51 compute-0 ceph-mgr[74715]: mgr load Constructed class from module: balancer
Dec 11 09:12:51 compute-0 ceph-mgr[74715]: [balancer INFO root] Starting
Dec 11 09:12:51 compute-0 ceph-mgr[74715]: [balancer INFO root] Optimize plan auto_2025-12-11_09:12:51
Dec 11 09:12:51 compute-0 ceph-mgr[74715]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec 11 09:12:51 compute-0 ceph-mgr[74715]: [balancer INFO root] do_upmap
Dec 11 09:12:51 compute-0 ceph-mgr[74715]: [balancer INFO root] No pools available
Dec 11 09:12:51 compute-0 ceph-mgr[74715]: [cephadm DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec 11 09:12:51 compute-0 ceph-mgr[74715]: [cephadm INFO cephadm.migrations] Found migration_current of "None". Setting to last migration.
Dec 11 09:12:51 compute-0 ceph-mgr[74715]: log_channel(cephadm) log [INF] : Found migration_current of "None". Setting to last migration.
Dec 11 09:12:51 compute-0 ceph-mon[74426]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=mgr/cephadm/migration_current}] v 0)
Dec 11 09:12:51 compute-0 ceph-mon[74426]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/1266408288' entity='mgr.compute-0.wwpcae' 
Dec 11 09:12:51 compute-0 ceph-mon[74426]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/config_checks}] v 0)
Dec 11 09:12:51 compute-0 ceph-mon[74426]: Active manager daemon compute-0.wwpcae restarted
Dec 11 09:12:51 compute-0 ceph-mon[74426]: Activating manager daemon compute-0.wwpcae
Dec 11 09:12:51 compute-0 ceph-mon[74426]: osdmap e2: 0 total, 0 up, 0 in
Dec 11 09:12:51 compute-0 ceph-mon[74426]: mgrmap e6: compute-0.wwpcae(active, starting, since 0.311812s)
Dec 11 09:12:51 compute-0 ceph-mon[74426]: from='mgr.14124 192.168.122.100:0/1266408288' entity='mgr.compute-0.wwpcae' cmd=[{"prefix": "mon metadata", "id": "compute-0"}]: dispatch
Dec 11 09:12:51 compute-0 ceph-mon[74426]: from='mgr.14124 192.168.122.100:0/1266408288' entity='mgr.compute-0.wwpcae' cmd=[{"prefix": "mgr metadata", "who": "compute-0.wwpcae", "id": "compute-0.wwpcae"}]: dispatch
Dec 11 09:12:51 compute-0 ceph-mon[74426]: from='mgr.14124 192.168.122.100:0/1266408288' entity='mgr.compute-0.wwpcae' cmd=[{"prefix": "mds metadata"}]: dispatch
Dec 11 09:12:51 compute-0 ceph-mon[74426]: from='mgr.14124 192.168.122.100:0/1266408288' entity='mgr.compute-0.wwpcae' cmd=[{"prefix": "osd metadata"}]: dispatch
Dec 11 09:12:51 compute-0 ceph-mon[74426]: from='mgr.14124 192.168.122.100:0/1266408288' entity='mgr.compute-0.wwpcae' cmd=[{"prefix": "mon metadata"}]: dispatch
Dec 11 09:12:51 compute-0 ceph-mon[74426]: Manager daemon compute-0.wwpcae is now available
Dec 11 09:12:51 compute-0 ceph-mon[74426]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/1266408288' entity='mgr.compute-0.wwpcae' 
Dec 11 09:12:51 compute-0 ceph-mgr[74715]: mgr load Constructed class from module: cephadm
Dec 11 09:12:51 compute-0 ceph-mgr[74715]: [crash DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec 11 09:12:51 compute-0 ceph-mgr[74715]: mgr load Constructed class from module: crash
Dec 11 09:12:51 compute-0 ceph-mgr[74715]: [devicehealth DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec 11 09:12:51 compute-0 ceph-mgr[74715]: mgr load Constructed class from module: devicehealth
Dec 11 09:12:51 compute-0 ceph-mgr[74715]: [devicehealth INFO root] Starting
Dec 11 09:12:51 compute-0 ceph-mgr[74715]: [iostat DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec 11 09:12:51 compute-0 ceph-mgr[74715]: mgr load Constructed class from module: iostat
Dec 11 09:12:51 compute-0 ceph-mgr[74715]: [nfs DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec 11 09:12:51 compute-0 ceph-mgr[74715]: mgr load Constructed class from module: nfs
Dec 11 09:12:51 compute-0 ceph-mgr[74715]: [orchestrator DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec 11 09:12:51 compute-0 ceph-mgr[74715]: mgr load Constructed class from module: orchestrator
Dec 11 09:12:51 compute-0 ceph-mgr[74715]: [pg_autoscaler DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec 11 09:12:51 compute-0 ceph-mgr[74715]: mgr load Constructed class from module: pg_autoscaler
Dec 11 09:12:51 compute-0 ceph-mgr[74715]: [progress DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec 11 09:12:51 compute-0 ceph-mgr[74715]: mgr load Constructed class from module: progress
Dec 11 09:12:51 compute-0 ceph-mgr[74715]: [rbd_support DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec 11 09:12:51 compute-0 ceph-mon[74426]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config dump", "format": "json"} v 0)
Dec 11 09:12:51 compute-0 ceph-mon[74426]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/1266408288' entity='mgr.compute-0.wwpcae' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
Dec 11 09:12:51 compute-0 ceph-mgr[74715]: [progress INFO root] Loading...
Dec 11 09:12:51 compute-0 ceph-mgr[74715]: [progress INFO root] No stored events to load
Dec 11 09:12:51 compute-0 ceph-mgr[74715]: [progress INFO root] Loaded [] historic events
Dec 11 09:12:51 compute-0 ceph-mgr[74715]: [progress INFO root] Loaded OSDMap, ready.
Dec 11 09:12:51 compute-0 ceph-mgr[74715]: [pg_autoscaler INFO root] _maybe_adjust
Dec 11 09:12:51 compute-0 ceph-mon[74426]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config dump", "format": "json"} v 0)
Dec 11 09:12:51 compute-0 ceph-mon[74426]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/1266408288' entity='mgr.compute-0.wwpcae' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
Dec 11 09:12:51 compute-0 ceph-mgr[74715]: [rbd_support INFO root] recovery thread starting
Dec 11 09:12:51 compute-0 ceph-mgr[74715]: [rbd_support INFO root] starting setup
Dec 11 09:12:51 compute-0 ceph-mgr[74715]: mgr load Constructed class from module: rbd_support
Dec 11 09:12:51 compute-0 ceph-mon[74426]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.wwpcae/mirror_snapshot_schedule"} v 0)
Dec 11 09:12:51 compute-0 ceph-mon[74426]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/1266408288' entity='mgr.compute-0.wwpcae' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.wwpcae/mirror_snapshot_schedule"}]: dispatch
Dec 11 09:12:51 compute-0 ceph-mgr[74715]: [restful DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec 11 09:12:51 compute-0 ceph-mgr[74715]: mgr load Constructed class from module: restful
Dec 11 09:12:51 compute-0 ceph-mgr[74715]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec 11 09:12:51 compute-0 ceph-mgr[74715]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: starting
Dec 11 09:12:51 compute-0 ceph-mgr[74715]: [restful INFO root] server_addr: :: server_port: 8003
Dec 11 09:12:51 compute-0 ceph-mgr[74715]: [status DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec 11 09:12:51 compute-0 ceph-mgr[74715]: mgr load Constructed class from module: status
Dec 11 09:12:51 compute-0 ceph-mgr[74715]: [rbd_support INFO root] PerfHandler: starting
Dec 11 09:12:51 compute-0 ceph-mgr[74715]: [telemetry DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec 11 09:12:51 compute-0 ceph-mgr[74715]: mgr load Constructed class from module: telemetry
Dec 11 09:12:51 compute-0 ceph-mgr[74715]: [rbd_support INFO root] TaskHandler: starting
Dec 11 09:12:51 compute-0 ceph-mgr[74715]: [volumes DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec 11 09:12:51 compute-0 ceph-mgr[74715]: [restful WARNING root] server not running: no certificate configured
Dec 11 09:12:51 compute-0 ceph-mon[74426]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.wwpcae/trash_purge_schedule"} v 0)
Dec 11 09:12:51 compute-0 ceph-mon[74426]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/1266408288' entity='mgr.compute-0.wwpcae' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.wwpcae/trash_purge_schedule"}]: dispatch
Dec 11 09:12:51 compute-0 ceph-mgr[74715]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec 11 09:12:51 compute-0 ceph-mgr[74715]: [rbd_support INFO root] TrashPurgeScheduleHandler: starting
Dec 11 09:12:51 compute-0 ceph-mgr[74715]: [rbd_support INFO root] setup complete
Dec 11 09:12:51 compute-0 ceph-mgr[74715]: mgr load Constructed class from module: volumes
Dec 11 09:12:51 compute-0 ceph-mon[74426]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/cert_store.cert.agent_endpoint_root_cert}] v 0)
Dec 11 09:12:51 compute-0 ceph-mon[74426]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/1266408288' entity='mgr.compute-0.wwpcae' 
Dec 11 09:12:51 compute-0 ceph-mon[74426]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/cert_store.key.agent_endpoint_key}] v 0)
Dec 11 09:12:51 compute-0 ceph-mon[74426]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/1266408288' entity='mgr.compute-0.wwpcae' 
Dec 11 09:12:52 compute-0 ceph-mon[74426]: log_channel(cluster) log [DBG] : mgrmap e7: compute-0.wwpcae(active, since 1.31965s)
Dec 11 09:12:52 compute-0 ceph-mgr[74715]: log_channel(audit) log [DBG] : from='client.14128 -' entity='client.admin' cmd=[{"prefix": "get_command_descriptions"}]: dispatch
Dec 11 09:12:52 compute-0 ceph-mgr[74715]: log_channel(audit) log [DBG] : from='client.14128 -' entity='client.admin' cmd=[{"prefix": "mgr_status"}]: dispatch
Dec 11 09:12:52 compute-0 pedantic_solomon[75310]: {
Dec 11 09:12:52 compute-0 pedantic_solomon[75310]:     "mgrmap_epoch": 7,
Dec 11 09:12:52 compute-0 pedantic_solomon[75310]:     "initialized": true
Dec 11 09:12:52 compute-0 pedantic_solomon[75310]: }
Dec 11 09:12:52 compute-0 systemd[1]: libpod-f06762565aeeddb56f5f2b99301cbebc4965e91fc49e784f13da35668564546c.scope: Deactivated successfully.
Dec 11 09:12:52 compute-0 podman[75288]: 2025-12-11 09:12:52.077020207 +0000 UTC m=+7.836994718 container died f06762565aeeddb56f5f2b99301cbebc4965e91fc49e784f13da35668564546c (image=quay.io/ceph/ceph:v19, name=pedantic_solomon, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, ceph=True)
Dec 11 09:12:52 compute-0 ceph-mon[74426]: Found migration_current of "None". Setting to last migration.
Dec 11 09:12:52 compute-0 ceph-mon[74426]: from='mgr.14124 192.168.122.100:0/1266408288' entity='mgr.compute-0.wwpcae' 
Dec 11 09:12:52 compute-0 ceph-mon[74426]: from='mgr.14124 192.168.122.100:0/1266408288' entity='mgr.compute-0.wwpcae' 
Dec 11 09:12:52 compute-0 ceph-mon[74426]: from='mgr.14124 192.168.122.100:0/1266408288' entity='mgr.compute-0.wwpcae' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
Dec 11 09:12:52 compute-0 ceph-mon[74426]: from='mgr.14124 192.168.122.100:0/1266408288' entity='mgr.compute-0.wwpcae' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
Dec 11 09:12:52 compute-0 ceph-mon[74426]: from='mgr.14124 192.168.122.100:0/1266408288' entity='mgr.compute-0.wwpcae' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.wwpcae/mirror_snapshot_schedule"}]: dispatch
Dec 11 09:12:52 compute-0 ceph-mon[74426]: from='mgr.14124 192.168.122.100:0/1266408288' entity='mgr.compute-0.wwpcae' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.wwpcae/trash_purge_schedule"}]: dispatch
Dec 11 09:12:52 compute-0 ceph-mon[74426]: from='mgr.14124 192.168.122.100:0/1266408288' entity='mgr.compute-0.wwpcae' 
Dec 11 09:12:52 compute-0 ceph-mon[74426]: from='mgr.14124 192.168.122.100:0/1266408288' entity='mgr.compute-0.wwpcae' 
Dec 11 09:12:52 compute-0 ceph-mon[74426]: mgrmap e7: compute-0.wwpcae(active, since 1.31965s)
Dec 11 09:12:52 compute-0 systemd[1]: var-lib-containers-storage-overlay-6046f987d6e61e28d33df24f3d32a3e5e426f74439a27e702c613de793bb74f9-merged.mount: Deactivated successfully.
Dec 11 09:12:52 compute-0 podman[75288]: 2025-12-11 09:12:52.127122114 +0000 UTC m=+7.887096605 container remove f06762565aeeddb56f5f2b99301cbebc4965e91fc49e784f13da35668564546c (image=quay.io/ceph/ceph:v19, name=pedantic_solomon, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, ceph=True)
Dec 11 09:12:52 compute-0 systemd[1]: libpod-conmon-f06762565aeeddb56f5f2b99301cbebc4965e91fc49e784f13da35668564546c.scope: Deactivated successfully.
Dec 11 09:12:52 compute-0 podman[75461]: 2025-12-11 09:12:52.206167157 +0000 UTC m=+0.047538550 container create 0c7685774ac2336c271f307da292c5a4078a68eea270114c599fe0b0ae2a43c3 (image=quay.io/ceph/ceph:v19, name=flamboyant_elbakyan, ceph=True, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 11 09:12:52 compute-0 podman[75461]: 2025-12-11 09:12:52.189043374 +0000 UTC m=+0.030414787 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Dec 11 09:12:52 compute-0 systemd[1]: Started libpod-conmon-0c7685774ac2336c271f307da292c5a4078a68eea270114c599fe0b0ae2a43c3.scope.
Dec 11 09:12:52 compute-0 systemd[1]: Started libcrun container.
Dec 11 09:12:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/133d6bd200ba60dc9321765827041c860b2f26af56670f38747ed42dc492a86b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 11 09:12:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/133d6bd200ba60dc9321765827041c860b2f26af56670f38747ed42dc492a86b/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Dec 11 09:12:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/133d6bd200ba60dc9321765827041c860b2f26af56670f38747ed42dc492a86b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 11 09:12:52 compute-0 podman[75461]: 2025-12-11 09:12:52.384629318 +0000 UTC m=+0.226000731 container init 0c7685774ac2336c271f307da292c5a4078a68eea270114c599fe0b0ae2a43c3 (image=quay.io/ceph/ceph:v19, name=flamboyant_elbakyan, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 11 09:12:52 compute-0 podman[75461]: 2025-12-11 09:12:52.3917179 +0000 UTC m=+0.233089293 container start 0c7685774ac2336c271f307da292c5a4078a68eea270114c599fe0b0ae2a43c3 (image=quay.io/ceph/ceph:v19, name=flamboyant_elbakyan, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Dec 11 09:12:52 compute-0 podman[75461]: 2025-12-11 09:12:52.395589431 +0000 UTC m=+0.236960854 container attach 0c7685774ac2336c271f307da292c5a4078a68eea270114c599fe0b0ae2a43c3 (image=quay.io/ceph/ceph:v19, name=flamboyant_elbakyan, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Dec 11 09:12:52 compute-0 ceph-mgr[74715]: [cephadm INFO cherrypy.error] [11/Dec/2025:09:12:52] ENGINE Bus STARTING
Dec 11 09:12:52 compute-0 ceph-mgr[74715]: log_channel(cephadm) log [INF] : [11/Dec/2025:09:12:52] ENGINE Bus STARTING
Dec 11 09:12:52 compute-0 ceph-mgr[74715]: [cephadm INFO cherrypy.error] [11/Dec/2025:09:12:52] ENGINE Serving on http://192.168.122.100:8765
Dec 11 09:12:52 compute-0 ceph-mgr[74715]: log_channel(cephadm) log [INF] : [11/Dec/2025:09:12:52] ENGINE Serving on http://192.168.122.100:8765
Dec 11 09:12:52 compute-0 ceph-mgr[74715]: log_channel(audit) log [DBG] : from='client.14136 -' entity='client.admin' cmd=[{"prefix": "orch set backend", "module_name": "cephadm", "target": ["mon-mgr", ""]}]: dispatch
Dec 11 09:12:52 compute-0 ceph-mon[74426]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=mgr/orchestrator/orchestrator}] v 0)
Dec 11 09:12:52 compute-0 ceph-mon[74426]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/1266408288' entity='mgr.compute-0.wwpcae' 
Dec 11 09:12:52 compute-0 ceph-mon[74426]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config dump", "format": "json"} v 0)
Dec 11 09:12:52 compute-0 ceph-mon[74426]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/1266408288' entity='mgr.compute-0.wwpcae' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
Dec 11 09:12:52 compute-0 systemd[1]: libpod-0c7685774ac2336c271f307da292c5a4078a68eea270114c599fe0b0ae2a43c3.scope: Deactivated successfully.
Dec 11 09:12:52 compute-0 podman[75461]: 2025-12-11 09:12:52.828861365 +0000 UTC m=+0.670232758 container died 0c7685774ac2336c271f307da292c5a4078a68eea270114c599fe0b0ae2a43c3 (image=quay.io/ceph/ceph:v19, name=flamboyant_elbakyan, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250325)
Dec 11 09:12:52 compute-0 ceph-mgr[74715]: [cephadm INFO cherrypy.error] [11/Dec/2025:09:12:52] ENGINE Serving on https://192.168.122.100:7150
Dec 11 09:12:52 compute-0 ceph-mgr[74715]: log_channel(cephadm) log [INF] : [11/Dec/2025:09:12:52] ENGINE Serving on https://192.168.122.100:7150
Dec 11 09:12:52 compute-0 ceph-mgr[74715]: [cephadm INFO cherrypy.error] [11/Dec/2025:09:12:52] ENGINE Bus STARTED
Dec 11 09:12:52 compute-0 ceph-mgr[74715]: log_channel(cephadm) log [INF] : [11/Dec/2025:09:12:52] ENGINE Bus STARTED
Dec 11 09:12:52 compute-0 ceph-mon[74426]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config dump", "format": "json"} v 0)
Dec 11 09:12:52 compute-0 ceph-mon[74426]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/1266408288' entity='mgr.compute-0.wwpcae' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
Dec 11 09:12:52 compute-0 ceph-mgr[74715]: [cephadm INFO cherrypy.error] [11/Dec/2025:09:12:52] ENGINE Client ('192.168.122.100', 36414) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)')
Dec 11 09:12:52 compute-0 ceph-mgr[74715]: log_channel(cephadm) log [INF] : [11/Dec/2025:09:12:52] ENGINE Client ('192.168.122.100', 36414) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)')
Dec 11 09:12:53 compute-0 ceph-mgr[74715]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Dec 11 09:12:53 compute-0 systemd[1]: var-lib-containers-storage-overlay-133d6bd200ba60dc9321765827041c860b2f26af56670f38747ed42dc492a86b-merged.mount: Deactivated successfully.
Dec 11 09:12:53 compute-0 podman[75461]: 2025-12-11 09:12:53.089837527 +0000 UTC m=+0.931208920 container remove 0c7685774ac2336c271f307da292c5a4078a68eea270114c599fe0b0ae2a43c3 (image=quay.io/ceph/ceph:v19, name=flamboyant_elbakyan, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid)
Dec 11 09:12:53 compute-0 systemd[1]: libpod-conmon-0c7685774ac2336c271f307da292c5a4078a68eea270114c599fe0b0ae2a43c3.scope: Deactivated successfully.
Dec 11 09:12:53 compute-0 ceph-mon[74426]: log_channel(cluster) log [DBG] : mgrmap e8: compute-0.wwpcae(active, since 2s)
Dec 11 09:12:53 compute-0 ceph-mon[74426]: from='client.14128 -' entity='client.admin' cmd=[{"prefix": "get_command_descriptions"}]: dispatch
Dec 11 09:12:53 compute-0 ceph-mon[74426]: from='client.14128 -' entity='client.admin' cmd=[{"prefix": "mgr_status"}]: dispatch
Dec 11 09:12:53 compute-0 ceph-mon[74426]: from='mgr.14124 192.168.122.100:0/1266408288' entity='mgr.compute-0.wwpcae' 
Dec 11 09:12:53 compute-0 ceph-mon[74426]: from='mgr.14124 192.168.122.100:0/1266408288' entity='mgr.compute-0.wwpcae' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
Dec 11 09:12:53 compute-0 ceph-mon[74426]: from='mgr.14124 192.168.122.100:0/1266408288' entity='mgr.compute-0.wwpcae' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
Dec 11 09:12:53 compute-0 podman[75540]: 2025-12-11 09:12:53.167719602 +0000 UTC m=+0.051168455 container create 303848006dfcbb10ec34772d05b51e65650617419f295ba2cdbaee0c623e4635 (image=quay.io/ceph/ceph:v19, name=infallible_mclean, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Dec 11 09:12:53 compute-0 podman[75540]: 2025-12-11 09:12:53.143718954 +0000 UTC m=+0.027167827 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Dec 11 09:12:53 compute-0 systemd[1]: Started libpod-conmon-303848006dfcbb10ec34772d05b51e65650617419f295ba2cdbaee0c623e4635.scope.
Dec 11 09:12:53 compute-0 systemd[1]: Started libcrun container.
Dec 11 09:12:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a89bf31fcf629aac87c538200fa066b07e8a91245ba8d8a9ed604554a76b593e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 11 09:12:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a89bf31fcf629aac87c538200fa066b07e8a91245ba8d8a9ed604554a76b593e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 11 09:12:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a89bf31fcf629aac87c538200fa066b07e8a91245ba8d8a9ed604554a76b593e/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Dec 11 09:12:53 compute-0 podman[75540]: 2025-12-11 09:12:53.309918647 +0000 UTC m=+0.193367520 container init 303848006dfcbb10ec34772d05b51e65650617419f295ba2cdbaee0c623e4635 (image=quay.io/ceph/ceph:v19, name=infallible_mclean, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Dec 11 09:12:53 compute-0 podman[75540]: 2025-12-11 09:12:53.316607274 +0000 UTC m=+0.200056127 container start 303848006dfcbb10ec34772d05b51e65650617419f295ba2cdbaee0c623e4635 (image=quay.io/ceph/ceph:v19, name=infallible_mclean, OSD_FLAVOR=default, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 11 09:12:53 compute-0 podman[75540]: 2025-12-11 09:12:53.320397104 +0000 UTC m=+0.203845977 container attach 303848006dfcbb10ec34772d05b51e65650617419f295ba2cdbaee0c623e4635 (image=quay.io/ceph/ceph:v19, name=infallible_mclean, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True)
Dec 11 09:12:53 compute-0 ceph-mgr[74715]: log_channel(audit) log [DBG] : from='client.14138 -' entity='client.admin' cmd=[{"prefix": "cephadm set-user", "user": "ceph-admin", "target": ["mon-mgr", ""]}]: dispatch
Dec 11 09:12:53 compute-0 ceph-mon[74426]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/ssh_user}] v 0)
Dec 11 09:12:53 compute-0 ceph-mon[74426]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/1266408288' entity='mgr.compute-0.wwpcae' 
Dec 11 09:12:53 compute-0 ceph-mgr[74715]: [cephadm INFO root] Set ssh ssh_user
Dec 11 09:12:53 compute-0 ceph-mgr[74715]: log_channel(cephadm) log [INF] : Set ssh ssh_user
Dec 11 09:12:53 compute-0 ceph-mon[74426]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/ssh_config}] v 0)
Dec 11 09:12:53 compute-0 ceph-mon[74426]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/1266408288' entity='mgr.compute-0.wwpcae' 
Dec 11 09:12:53 compute-0 ceph-mgr[74715]: [cephadm INFO root] Set ssh ssh_config
Dec 11 09:12:53 compute-0 ceph-mgr[74715]: log_channel(cephadm) log [INF] : Set ssh ssh_config
Dec 11 09:12:53 compute-0 ceph-mgr[74715]: [cephadm INFO root] ssh user set to ceph-admin. sudo will be used
Dec 11 09:12:53 compute-0 ceph-mgr[74715]: log_channel(cephadm) log [INF] : ssh user set to ceph-admin. sudo will be used
Dec 11 09:12:53 compute-0 infallible_mclean[75556]: ssh user set to ceph-admin. sudo will be used
Dec 11 09:12:53 compute-0 systemd[1]: libpod-303848006dfcbb10ec34772d05b51e65650617419f295ba2cdbaee0c623e4635.scope: Deactivated successfully.
Dec 11 09:12:53 compute-0 podman[75540]: 2025-12-11 09:12:53.754997263 +0000 UTC m=+0.638446116 container died 303848006dfcbb10ec34772d05b51e65650617419f295ba2cdbaee0c623e4635 (image=quay.io/ceph/ceph:v19, name=infallible_mclean, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 11 09:12:53 compute-0 systemd[1]: var-lib-containers-storage-overlay-a89bf31fcf629aac87c538200fa066b07e8a91245ba8d8a9ed604554a76b593e-merged.mount: Deactivated successfully.
Dec 11 09:12:53 compute-0 ceph-mon[74426]: mon.compute-0@0(leader).osd e2 _set_new_cache_sizes cache_size:1019919813 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 11 09:12:54 compute-0 podman[75540]: 2025-12-11 09:12:54.017691744 +0000 UTC m=+0.901140597 container remove 303848006dfcbb10ec34772d05b51e65650617419f295ba2cdbaee0c623e4635 (image=quay.io/ceph/ceph:v19, name=infallible_mclean, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=squid, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 11 09:12:54 compute-0 systemd[1]: libpod-conmon-303848006dfcbb10ec34772d05b51e65650617419f295ba2cdbaee0c623e4635.scope: Deactivated successfully.
Dec 11 09:12:54 compute-0 podman[75592]: 2025-12-11 09:12:54.095001277 +0000 UTC m=+0.054078443 container create 14d77edbdd92228d23a0394987defdb16325ad8a8b73fdbdc05eb325dbc86afa (image=quay.io/ceph/ceph:v19, name=flamboyant_thompson, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=squid, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 11 09:12:54 compute-0 systemd[1]: Started libpod-conmon-14d77edbdd92228d23a0394987defdb16325ad8a8b73fdbdc05eb325dbc86afa.scope.
Dec 11 09:12:54 compute-0 ceph-mon[74426]: [11/Dec/2025:09:12:52] ENGINE Bus STARTING
Dec 11 09:12:54 compute-0 ceph-mon[74426]: [11/Dec/2025:09:12:52] ENGINE Serving on http://192.168.122.100:8765
Dec 11 09:12:54 compute-0 ceph-mon[74426]: from='client.14136 -' entity='client.admin' cmd=[{"prefix": "orch set backend", "module_name": "cephadm", "target": ["mon-mgr", ""]}]: dispatch
Dec 11 09:12:54 compute-0 ceph-mon[74426]: [11/Dec/2025:09:12:52] ENGINE Serving on https://192.168.122.100:7150
Dec 11 09:12:54 compute-0 ceph-mon[74426]: [11/Dec/2025:09:12:52] ENGINE Bus STARTED
Dec 11 09:12:54 compute-0 ceph-mon[74426]: [11/Dec/2025:09:12:52] ENGINE Client ('192.168.122.100', 36414) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)')
Dec 11 09:12:54 compute-0 ceph-mon[74426]: mgrmap e8: compute-0.wwpcae(active, since 2s)
Dec 11 09:12:54 compute-0 ceph-mon[74426]: from='mgr.14124 192.168.122.100:0/1266408288' entity='mgr.compute-0.wwpcae' 
Dec 11 09:12:54 compute-0 ceph-mon[74426]: from='mgr.14124 192.168.122.100:0/1266408288' entity='mgr.compute-0.wwpcae' 
Dec 11 09:12:54 compute-0 podman[75592]: 2025-12-11 09:12:54.065666788 +0000 UTC m=+0.024743974 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Dec 11 09:12:54 compute-0 systemd[1]: Started libcrun container.
Dec 11 09:12:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f86234539c6dab7c60593f2de95db65a9006c6989a1e120bce57924bd7f4a946/merged/tmp/cephadm-ssh-key supports timestamps until 2038 (0x7fffffff)
Dec 11 09:12:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f86234539c6dab7c60593f2de95db65a9006c6989a1e120bce57924bd7f4a946/merged/tmp/cephadm-ssh-key.pub supports timestamps until 2038 (0x7fffffff)
Dec 11 09:12:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f86234539c6dab7c60593f2de95db65a9006c6989a1e120bce57924bd7f4a946/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 11 09:12:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f86234539c6dab7c60593f2de95db65a9006c6989a1e120bce57924bd7f4a946/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 11 09:12:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f86234539c6dab7c60593f2de95db65a9006c6989a1e120bce57924bd7f4a946/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Dec 11 09:12:54 compute-0 podman[75592]: 2025-12-11 09:12:54.185945697 +0000 UTC m=+0.145022883 container init 14d77edbdd92228d23a0394987defdb16325ad8a8b73fdbdc05eb325dbc86afa (image=quay.io/ceph/ceph:v19, name=flamboyant_thompson, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2)
Dec 11 09:12:54 compute-0 podman[75592]: 2025-12-11 09:12:54.19220895 +0000 UTC m=+0.151286116 container start 14d77edbdd92228d23a0394987defdb16325ad8a8b73fdbdc05eb325dbc86afa (image=quay.io/ceph/ceph:v19, name=flamboyant_thompson, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 11 09:12:54 compute-0 podman[75592]: 2025-12-11 09:12:54.200574425 +0000 UTC m=+0.159651601 container attach 14d77edbdd92228d23a0394987defdb16325ad8a8b73fdbdc05eb325dbc86afa (image=quay.io/ceph/ceph:v19, name=flamboyant_thompson, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Dec 11 09:12:54 compute-0 ceph-mgr[74715]: log_channel(audit) log [DBG] : from='client.14140 -' entity='client.admin' cmd=[{"prefix": "cephadm set-priv-key", "target": ["mon-mgr", ""]}]: dispatch
Dec 11 09:12:54 compute-0 ceph-mon[74426]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/ssh_identity_key}] v 0)
Dec 11 09:12:54 compute-0 ceph-mon[74426]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/1266408288' entity='mgr.compute-0.wwpcae' 
Dec 11 09:12:54 compute-0 ceph-mgr[74715]: [cephadm INFO root] Set ssh ssh_identity_key
Dec 11 09:12:54 compute-0 ceph-mgr[74715]: log_channel(cephadm) log [INF] : Set ssh ssh_identity_key
Dec 11 09:12:54 compute-0 ceph-mgr[74715]: [cephadm INFO root] Set ssh private key
Dec 11 09:12:54 compute-0 ceph-mgr[74715]: log_channel(cephadm) log [INF] : Set ssh private key
Dec 11 09:12:54 compute-0 systemd[1]: libpod-14d77edbdd92228d23a0394987defdb16325ad8a8b73fdbdc05eb325dbc86afa.scope: Deactivated successfully.
Dec 11 09:12:54 compute-0 podman[75592]: 2025-12-11 09:12:54.570054495 +0000 UTC m=+0.529131661 container died 14d77edbdd92228d23a0394987defdb16325ad8a8b73fdbdc05eb325dbc86afa (image=quay.io/ceph/ceph:v19, name=flamboyant_thompson, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, ceph=True, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec 11 09:12:54 compute-0 systemd[1]: var-lib-containers-storage-overlay-f86234539c6dab7c60593f2de95db65a9006c6989a1e120bce57924bd7f4a946-merged.mount: Deactivated successfully.
Dec 11 09:12:54 compute-0 podman[75592]: 2025-12-11 09:12:54.611618251 +0000 UTC m=+0.570695417 container remove 14d77edbdd92228d23a0394987defdb16325ad8a8b73fdbdc05eb325dbc86afa (image=quay.io/ceph/ceph:v19, name=flamboyant_thompson, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 11 09:12:54 compute-0 systemd[1]: libpod-conmon-14d77edbdd92228d23a0394987defdb16325ad8a8b73fdbdc05eb325dbc86afa.scope: Deactivated successfully.
Dec 11 09:12:54 compute-0 podman[75645]: 2025-12-11 09:12:54.680649483 +0000 UTC m=+0.045783830 container create 7979c9ff257b53d5064b44c95139caa73820e9abe86bc84ac4a4d7a1c5042e69 (image=quay.io/ceph/ceph:v19, name=loving_germain, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1)
Dec 11 09:12:54 compute-0 systemd[1]: Started libpod-conmon-7979c9ff257b53d5064b44c95139caa73820e9abe86bc84ac4a4d7a1c5042e69.scope.
Dec 11 09:12:54 compute-0 systemd[1]: Started libcrun container.
Dec 11 09:12:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/17c5334555edce5599b57cd78c49b7d10e620aa8d8861047cfc844d3181990fc/merged/tmp/cephadm-ssh-key supports timestamps until 2038 (0x7fffffff)
Dec 11 09:12:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/17c5334555edce5599b57cd78c49b7d10e620aa8d8861047cfc844d3181990fc/merged/tmp/cephadm-ssh-key.pub supports timestamps until 2038 (0x7fffffff)
Dec 11 09:12:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/17c5334555edce5599b57cd78c49b7d10e620aa8d8861047cfc844d3181990fc/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Dec 11 09:12:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/17c5334555edce5599b57cd78c49b7d10e620aa8d8861047cfc844d3181990fc/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 11 09:12:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/17c5334555edce5599b57cd78c49b7d10e620aa8d8861047cfc844d3181990fc/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 11 09:12:54 compute-0 podman[75645]: 2025-12-11 09:12:54.662213325 +0000 UTC m=+0.027347692 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Dec 11 09:12:54 compute-0 podman[75645]: 2025-12-11 09:12:54.770677521 +0000 UTC m=+0.135811888 container init 7979c9ff257b53d5064b44c95139caa73820e9abe86bc84ac4a4d7a1c5042e69 (image=quay.io/ceph/ceph:v19, name=loving_germain, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 11 09:12:54 compute-0 podman[75645]: 2025-12-11 09:12:54.776461278 +0000 UTC m=+0.141595625 container start 7979c9ff257b53d5064b44c95139caa73820e9abe86bc84ac4a4d7a1c5042e69 (image=quay.io/ceph/ceph:v19, name=loving_germain, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 11 09:12:54 compute-0 podman[75645]: 2025-12-11 09:12:54.780373272 +0000 UTC m=+0.145507629 container attach 7979c9ff257b53d5064b44c95139caa73820e9abe86bc84ac4a4d7a1c5042e69 (image=quay.io/ceph/ceph:v19, name=loving_germain, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Dec 11 09:12:55 compute-0 ceph-mgr[74715]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Dec 11 09:12:55 compute-0 ceph-mgr[74715]: log_channel(audit) log [DBG] : from='client.14142 -' entity='client.admin' cmd=[{"prefix": "cephadm set-pub-key", "target": ["mon-mgr", ""]}]: dispatch
Dec 11 09:12:55 compute-0 ceph-mon[74426]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/ssh_identity_pub}] v 0)
Dec 11 09:12:55 compute-0 ceph-mon[74426]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/1266408288' entity='mgr.compute-0.wwpcae' 
Dec 11 09:12:55 compute-0 ceph-mgr[74715]: [cephadm INFO root] Set ssh ssh_identity_pub
Dec 11 09:12:55 compute-0 ceph-mgr[74715]: log_channel(cephadm) log [INF] : Set ssh ssh_identity_pub
Dec 11 09:12:55 compute-0 systemd[1]: libpod-7979c9ff257b53d5064b44c95139caa73820e9abe86bc84ac4a4d7a1c5042e69.scope: Deactivated successfully.
Dec 11 09:12:55 compute-0 podman[75645]: 2025-12-11 09:12:55.252799119 +0000 UTC m=+0.617933466 container died 7979c9ff257b53d5064b44c95139caa73820e9abe86bc84ac4a4d7a1c5042e69 (image=quay.io/ceph/ceph:v19, name=loving_germain, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1)
Dec 11 09:12:55 compute-0 systemd[1]: var-lib-containers-storage-overlay-17c5334555edce5599b57cd78c49b7d10e620aa8d8861047cfc844d3181990fc-merged.mount: Deactivated successfully.
Dec 11 09:12:55 compute-0 podman[75645]: 2025-12-11 09:12:55.292890115 +0000 UTC m=+0.658024472 container remove 7979c9ff257b53d5064b44c95139caa73820e9abe86bc84ac4a4d7a1c5042e69 (image=quay.io/ceph/ceph:v19, name=loving_germain, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True)
Dec 11 09:12:55 compute-0 systemd[1]: libpod-conmon-7979c9ff257b53d5064b44c95139caa73820e9abe86bc84ac4a4d7a1c5042e69.scope: Deactivated successfully.
Dec 11 09:12:55 compute-0 podman[75699]: 2025-12-11 09:12:55.350721665 +0000 UTC m=+0.038696229 container create 5d8b56b9fde8628a71e5750ee3caec19a11b68ebc81f83a0977f0d64432cc1a1 (image=quay.io/ceph/ceph:v19, name=upbeat_hellman, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Dec 11 09:12:55 compute-0 systemd[1]: Started libpod-conmon-5d8b56b9fde8628a71e5750ee3caec19a11b68ebc81f83a0977f0d64432cc1a1.scope.
Dec 11 09:12:55 compute-0 systemd[1]: Started libcrun container.
Dec 11 09:12:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/965a3bdffce34fde316b9ccd21ebcbdbbe8f332b8fa6d18c5d7a4bb9b603571c/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 11 09:12:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/965a3bdffce34fde316b9ccd21ebcbdbbe8f332b8fa6d18c5d7a4bb9b603571c/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 11 09:12:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/965a3bdffce34fde316b9ccd21ebcbdbbe8f332b8fa6d18c5d7a4bb9b603571c/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Dec 11 09:12:55 compute-0 podman[75699]: 2025-12-11 09:12:55.417634795 +0000 UTC m=+0.105609369 container init 5d8b56b9fde8628a71e5750ee3caec19a11b68ebc81f83a0977f0d64432cc1a1 (image=quay.io/ceph/ceph:v19, name=upbeat_hellman, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec 11 09:12:55 compute-0 podman[75699]: 2025-12-11 09:12:55.422875224 +0000 UTC m=+0.110849788 container start 5d8b56b9fde8628a71e5750ee3caec19a11b68ebc81f83a0977f0d64432cc1a1 (image=quay.io/ceph/ceph:v19, name=upbeat_hellman, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Dec 11 09:12:55 compute-0 podman[75699]: 2025-12-11 09:12:55.427552503 +0000 UTC m=+0.115527087 container attach 5d8b56b9fde8628a71e5750ee3caec19a11b68ebc81f83a0977f0d64432cc1a1 (image=quay.io/ceph/ceph:v19, name=upbeat_hellman, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Dec 11 09:12:55 compute-0 podman[75699]: 2025-12-11 09:12:55.333594832 +0000 UTC m=+0.021569416 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Dec 11 09:12:55 compute-0 ceph-mon[74426]: from='client.14138 -' entity='client.admin' cmd=[{"prefix": "cephadm set-user", "user": "ceph-admin", "target": ["mon-mgr", ""]}]: dispatch
Dec 11 09:12:55 compute-0 ceph-mon[74426]: Set ssh ssh_user
Dec 11 09:12:55 compute-0 ceph-mon[74426]: Set ssh ssh_config
Dec 11 09:12:55 compute-0 ceph-mon[74426]: ssh user set to ceph-admin. sudo will be used
Dec 11 09:12:55 compute-0 ceph-mon[74426]: from='client.14140 -' entity='client.admin' cmd=[{"prefix": "cephadm set-priv-key", "target": ["mon-mgr", ""]}]: dispatch
Dec 11 09:12:55 compute-0 ceph-mon[74426]: from='mgr.14124 192.168.122.100:0/1266408288' entity='mgr.compute-0.wwpcae' 
Dec 11 09:12:55 compute-0 ceph-mon[74426]: Set ssh ssh_identity_key
Dec 11 09:12:55 compute-0 ceph-mon[74426]: Set ssh private key
Dec 11 09:12:55 compute-0 ceph-mon[74426]: from='mgr.14124 192.168.122.100:0/1266408288' entity='mgr.compute-0.wwpcae' 
Dec 11 09:12:55 compute-0 ceph-mgr[74715]: log_channel(audit) log [DBG] : from='client.14144 -' entity='client.admin' cmd=[{"prefix": "cephadm get-pub-key", "target": ["mon-mgr", ""]}]: dispatch
Dec 11 09:12:55 compute-0 upbeat_hellman[75716]: ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCxW5/xxYgt5dTq5mnmtiNa6QL9LLMtn128EXpo/V2HqhiTYGDeu5S/DNiGsd362jYCTGvElFUnDqwXetP0OUsxMWjaxWQxyytiKu/hoc1vb+DMJdhwCIg9fgV016FgP3juFY11ci/WsdAEr4evHDuatR1ocXIFRS8A0BxnMMOac9+H+LA0Iiqa0F+Rqto63anoKsgRwhqHT25rCtSJka77bCPU40DsRmcNE9Cx+xX4JWCcpFOOOTv3l5z/yImggLvsMrrtwNOlR6bYTiNBccXWt7ObXzoxm3g8GXqRRM4vsORDNFdoaXTOMQC022Ag8CcMeHZdfEh73rggls73wfzVHBrPL/vyziu+4HtqdZlW0PA0iRb851EECXP+fINGd/SgY02LShH4YZrEjetUjzdbAuNVVJyanDXnTB7KmuVV/8ZimLXrYWMEOv1NoyZ1Ls2TkbfGkR077gCRkstY+noz+zEX7HMbQMqDJYcd/cGNBD19Jc+zQlFv6RnwtLANlR0= zuul@controller
Dec 11 09:12:55 compute-0 systemd[1]: libpod-5d8b56b9fde8628a71e5750ee3caec19a11b68ebc81f83a0977f0d64432cc1a1.scope: Deactivated successfully.
Dec 11 09:12:55 compute-0 podman[75699]: 2025-12-11 09:12:55.851689365 +0000 UTC m=+0.539663929 container died 5d8b56b9fde8628a71e5750ee3caec19a11b68ebc81f83a0977f0d64432cc1a1 (image=quay.io/ceph/ceph:v19, name=upbeat_hellman, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 11 09:12:55 compute-0 systemd[1]: var-lib-containers-storage-overlay-965a3bdffce34fde316b9ccd21ebcbdbbe8f332b8fa6d18c5d7a4bb9b603571c-merged.mount: Deactivated successfully.
Dec 11 09:12:55 compute-0 podman[75699]: 2025-12-11 09:12:55.891050427 +0000 UTC m=+0.579024991 container remove 5d8b56b9fde8628a71e5750ee3caec19a11b68ebc81f83a0977f0d64432cc1a1 (image=quay.io/ceph/ceph:v19, name=upbeat_hellman, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Dec 11 09:12:56 compute-0 systemd[1]: libpod-conmon-5d8b56b9fde8628a71e5750ee3caec19a11b68ebc81f83a0977f0d64432cc1a1.scope: Deactivated successfully.
Dec 11 09:12:56 compute-0 podman[75749]: 2025-12-11 09:12:56.065524012 +0000 UTC m=+0.045261864 container create e0af0b24f29fdb4f98686a841da32f603444926527fc50ce1af6f8d13800ab37 (image=quay.io/ceph/ceph:v19, name=happy_lederberg, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Dec 11 09:12:56 compute-0 systemd[1]: Started libpod-conmon-e0af0b24f29fdb4f98686a841da32f603444926527fc50ce1af6f8d13800ab37.scope.
Dec 11 09:12:56 compute-0 systemd[1]: Started libcrun container.
Dec 11 09:12:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/046b63c1b75dfbb2c41cb9e28d69fe20de7928cc7ea91c103b087fab022a1e1c/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 11 09:12:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/046b63c1b75dfbb2c41cb9e28d69fe20de7928cc7ea91c103b087fab022a1e1c/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Dec 11 09:12:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/046b63c1b75dfbb2c41cb9e28d69fe20de7928cc7ea91c103b087fab022a1e1c/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 11 09:12:56 compute-0 podman[75749]: 2025-12-11 09:12:56.047106764 +0000 UTC m=+0.026844636 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Dec 11 09:12:56 compute-0 podman[75749]: 2025-12-11 09:12:56.154025447 +0000 UTC m=+0.133763319 container init e0af0b24f29fdb4f98686a841da32f603444926527fc50ce1af6f8d13800ab37 (image=quay.io/ceph/ceph:v19, name=happy_lederberg, ceph=True, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325)
Dec 11 09:12:56 compute-0 podman[75749]: 2025-12-11 09:12:56.159890537 +0000 UTC m=+0.139628389 container start e0af0b24f29fdb4f98686a841da32f603444926527fc50ce1af6f8d13800ab37 (image=quay.io/ceph/ceph:v19, name=happy_lederberg, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, ceph=True, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Dec 11 09:12:56 compute-0 podman[75749]: 2025-12-11 09:12:56.163619374 +0000 UTC m=+0.143357226 container attach e0af0b24f29fdb4f98686a841da32f603444926527fc50ce1af6f8d13800ab37 (image=quay.io/ceph/ceph:v19, name=happy_lederberg, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.40.1, org.label-schema.license=GPLv2)
Dec 11 09:12:56 compute-0 ceph-mgr[74715]: log_channel(audit) log [DBG] : from='client.14146 -' entity='client.admin' cmd=[{"prefix": "orch host add", "hostname": "compute-0", "addr": "192.168.122.100", "target": ["mon-mgr", ""]}]: dispatch
Dec 11 09:12:56 compute-0 ceph-mon[74426]: from='client.14142 -' entity='client.admin' cmd=[{"prefix": "cephadm set-pub-key", "target": ["mon-mgr", ""]}]: dispatch
Dec 11 09:12:56 compute-0 ceph-mon[74426]: Set ssh ssh_identity_pub
Dec 11 09:12:56 compute-0 ceph-mon[74426]: from='client.14144 -' entity='client.admin' cmd=[{"prefix": "cephadm get-pub-key", "target": ["mon-mgr", ""]}]: dispatch
Dec 11 09:12:56 compute-0 ceph-mon[74426]: from='client.14146 -' entity='client.admin' cmd=[{"prefix": "orch host add", "hostname": "compute-0", "addr": "192.168.122.100", "target": ["mon-mgr", ""]}]: dispatch
Dec 11 09:12:56 compute-0 sshd-session[75796]: Accepted publickey for ceph-admin from 192.168.122.100 port 34240 ssh2: RSA SHA256:VeIx2NZka5hi0niQjHCLie+FE2InrWghbFhBMbpMPGo
Dec 11 09:12:56 compute-0 systemd[1]: Created slice User Slice of UID 42477.
Dec 11 09:12:56 compute-0 systemd[1]: Starting User Runtime Directory /run/user/42477...
Dec 11 09:12:56 compute-0 systemd-logind[792]: New session 21 of user ceph-admin.
Dec 11 09:12:56 compute-0 systemd[1]: Finished User Runtime Directory /run/user/42477.
Dec 11 09:12:56 compute-0 systemd[1]: Starting User Manager for UID 42477...
Dec 11 09:12:56 compute-0 systemd[75800]: pam_unix(systemd-user:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Dec 11 09:12:56 compute-0 sshd-session[75809]: Accepted publickey for ceph-admin from 192.168.122.100 port 34256 ssh2: RSA SHA256:VeIx2NZka5hi0niQjHCLie+FE2InrWghbFhBMbpMPGo
Dec 11 09:12:56 compute-0 systemd[75800]: Queued start job for default target Main User Target.
Dec 11 09:12:56 compute-0 systemd-logind[792]: New session 23 of user ceph-admin.
Dec 11 09:12:56 compute-0 systemd[75800]: Created slice User Application Slice.
Dec 11 09:12:56 compute-0 systemd[75800]: Started Mark boot as successful after the user session has run 2 minutes.
Dec 11 09:12:56 compute-0 systemd[75800]: Started Daily Cleanup of User's Temporary Directories.
Dec 11 09:12:56 compute-0 systemd[75800]: Reached target Paths.
Dec 11 09:12:56 compute-0 systemd[75800]: Reached target Timers.
Dec 11 09:12:56 compute-0 systemd[75800]: Starting D-Bus User Message Bus Socket...
Dec 11 09:12:56 compute-0 systemd[75800]: Starting Create User's Volatile Files and Directories...
Dec 11 09:12:56 compute-0 systemd[75800]: Listening on D-Bus User Message Bus Socket.
Dec 11 09:12:56 compute-0 systemd[75800]: Reached target Sockets.
Dec 11 09:12:56 compute-0 systemd[75800]: Finished Create User's Volatile Files and Directories.
Dec 11 09:12:56 compute-0 systemd[75800]: Reached target Basic System.
Dec 11 09:12:56 compute-0 systemd[75800]: Reached target Main User Target.
Dec 11 09:12:56 compute-0 systemd[75800]: Startup finished in 139ms.
Dec 11 09:12:56 compute-0 systemd[1]: Started User Manager for UID 42477.
Dec 11 09:12:56 compute-0 systemd[1]: Started Session 21 of User ceph-admin.
Dec 11 09:12:56 compute-0 systemd[1]: Started Session 23 of User ceph-admin.
Dec 11 09:12:57 compute-0 sshd-session[75796]: pam_unix(sshd:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Dec 11 09:12:57 compute-0 sshd-session[75809]: pam_unix(sshd:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Dec 11 09:12:57 compute-0 ceph-mgr[74715]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Dec 11 09:12:57 compute-0 sudo[75821]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 11 09:12:57 compute-0 sudo[75821]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 11 09:12:57 compute-0 sudo[75821]: pam_unix(sudo:session): session closed for user root
Dec 11 09:12:57 compute-0 sshd-session[75846]: Accepted publickey for ceph-admin from 192.168.122.100 port 34258 ssh2: RSA SHA256:VeIx2NZka5hi0niQjHCLie+FE2InrWghbFhBMbpMPGo
Dec 11 09:12:57 compute-0 systemd-logind[792]: New session 24 of user ceph-admin.
Dec 11 09:12:57 compute-0 systemd[1]: Started Session 24 of User ceph-admin.
Dec 11 09:12:57 compute-0 sshd-session[75846]: pam_unix(sshd:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Dec 11 09:12:57 compute-0 sudo[75850]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --timeout 895 check-host --expect-hostname compute-0
Dec 11 09:12:57 compute-0 sudo[75850]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 11 09:12:57 compute-0 sudo[75850]: pam_unix(sudo:session): session closed for user root
Dec 11 09:12:57 compute-0 sshd-session[75875]: Accepted publickey for ceph-admin from 192.168.122.100 port 34262 ssh2: RSA SHA256:VeIx2NZka5hi0niQjHCLie+FE2InrWghbFhBMbpMPGo
Dec 11 09:12:57 compute-0 systemd-logind[792]: New session 25 of user ceph-admin.
Dec 11 09:12:57 compute-0 systemd[1]: Started Session 25 of User ceph-admin.
Dec 11 09:12:57 compute-0 sshd-session[75875]: pam_unix(sshd:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Dec 11 09:12:57 compute-0 sudo[75879]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /var/lib/ceph/31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36
Dec 11 09:12:57 compute-0 sudo[75879]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 11 09:12:57 compute-0 sudo[75879]: pam_unix(sudo:session): session closed for user root
Dec 11 09:12:57 compute-0 ceph-mgr[74715]: [cephadm INFO cephadm.serve] Deploying cephadm binary to compute-0
Dec 11 09:12:57 compute-0 ceph-mgr[74715]: log_channel(cephadm) log [INF] : Deploying cephadm binary to compute-0
Dec 11 09:12:57 compute-0 sshd-session[75904]: Accepted publickey for ceph-admin from 192.168.122.100 port 34278 ssh2: RSA SHA256:VeIx2NZka5hi0niQjHCLie+FE2InrWghbFhBMbpMPGo
Dec 11 09:12:57 compute-0 systemd-logind[792]: New session 26 of user ceph-admin.
Dec 11 09:12:58 compute-0 systemd[1]: Started Session 26 of User ceph-admin.
Dec 11 09:12:58 compute-0 sshd-session[75904]: pam_unix(sshd:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Dec 11 09:12:58 compute-0 sudo[75908]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /var/lib/ceph/31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060
Dec 11 09:12:58 compute-0 sudo[75908]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 11 09:12:58 compute-0 sudo[75908]: pam_unix(sudo:session): session closed for user root
Dec 11 09:12:58 compute-0 sshd-session[75933]: Accepted publickey for ceph-admin from 192.168.122.100 port 34292 ssh2: RSA SHA256:VeIx2NZka5hi0niQjHCLie+FE2InrWghbFhBMbpMPGo
Dec 11 09:12:58 compute-0 systemd-logind[792]: New session 27 of user ceph-admin.
Dec 11 09:12:58 compute-0 systemd[1]: Started Session 27 of User ceph-admin.
Dec 11 09:12:58 compute-0 sshd-session[75933]: pam_unix(sshd:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Dec 11 09:12:58 compute-0 sudo[75937]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /tmp/cephadm-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060/var/lib/ceph/31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060
Dec 11 09:12:58 compute-0 sudo[75937]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 11 09:12:58 compute-0 sudo[75937]: pam_unix(sudo:session): session closed for user root
Dec 11 09:12:58 compute-0 ceph-mon[74426]: Deploying cephadm binary to compute-0
Dec 11 09:12:58 compute-0 sshd-session[75962]: Accepted publickey for ceph-admin from 192.168.122.100 port 34298 ssh2: RSA SHA256:VeIx2NZka5hi0niQjHCLie+FE2InrWghbFhBMbpMPGo
Dec 11 09:12:58 compute-0 systemd-logind[792]: New session 28 of user ceph-admin.
Dec 11 09:12:58 compute-0 systemd[1]: Started Session 28 of User ceph-admin.
Dec 11 09:12:58 compute-0 sshd-session[75962]: pam_unix(sshd:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Dec 11 09:12:58 compute-0 sudo[75966]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/touch /tmp/cephadm-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060/var/lib/ceph/31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36.new
Dec 11 09:12:58 compute-0 sudo[75966]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 11 09:12:58 compute-0 sudo[75966]: pam_unix(sudo:session): session closed for user root
Dec 11 09:12:58 compute-0 ceph-mon[74426]: mon.compute-0@0(leader).osd e2 _set_new_cache_sizes cache_size:1020053002 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 11 09:12:58 compute-0 sshd-session[75991]: Accepted publickey for ceph-admin from 192.168.122.100 port 34312 ssh2: RSA SHA256:VeIx2NZka5hi0niQjHCLie+FE2InrWghbFhBMbpMPGo
Dec 11 09:12:58 compute-0 systemd-logind[792]: New session 29 of user ceph-admin.
Dec 11 09:12:58 compute-0 systemd[1]: Started Session 29 of User ceph-admin.
Dec 11 09:12:59 compute-0 sshd-session[75991]: pam_unix(sshd:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Dec 11 09:12:59 compute-0 ceph-mgr[74715]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Dec 11 09:12:59 compute-0 sudo[75995]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R ceph-admin /tmp/cephadm-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060
Dec 11 09:12:59 compute-0 sudo[75995]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 11 09:12:59 compute-0 sudo[75995]: pam_unix(sudo:session): session closed for user root
Dec 11 09:12:59 compute-0 sshd-session[76020]: Accepted publickey for ceph-admin from 192.168.122.100 port 34320 ssh2: RSA SHA256:VeIx2NZka5hi0niQjHCLie+FE2InrWghbFhBMbpMPGo
Dec 11 09:12:59 compute-0 systemd-logind[792]: New session 30 of user ceph-admin.
Dec 11 09:12:59 compute-0 systemd[1]: Started Session 30 of User ceph-admin.
Dec 11 09:12:59 compute-0 sshd-session[76020]: pam_unix(sshd:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Dec 11 09:12:59 compute-0 sudo[76024]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060/var/lib/ceph/31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36.new
Dec 11 09:12:59 compute-0 sudo[76024]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 11 09:12:59 compute-0 sudo[76024]: pam_unix(sudo:session): session closed for user root
Dec 11 09:12:59 compute-0 sshd-session[76049]: Accepted publickey for ceph-admin from 192.168.122.100 port 34322 ssh2: RSA SHA256:VeIx2NZka5hi0niQjHCLie+FE2InrWghbFhBMbpMPGo
Dec 11 09:12:59 compute-0 systemd-logind[792]: New session 31 of user ceph-admin.
Dec 11 09:12:59 compute-0 systemd[1]: Started Session 31 of User ceph-admin.
Dec 11 09:12:59 compute-0 sshd-session[76049]: pam_unix(sshd:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Dec 11 09:13:00 compute-0 sshd-session[76076]: Accepted publickey for ceph-admin from 192.168.122.100 port 57116 ssh2: RSA SHA256:VeIx2NZka5hi0niQjHCLie+FE2InrWghbFhBMbpMPGo
Dec 11 09:13:00 compute-0 systemd-logind[792]: New session 32 of user ceph-admin.
Dec 11 09:13:00 compute-0 systemd[1]: Started Session 32 of User ceph-admin.
Dec 11 09:13:00 compute-0 sshd-session[76076]: pam_unix(sshd:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Dec 11 09:13:00 compute-0 sudo[76080]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mv /tmp/cephadm-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060/var/lib/ceph/31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36.new /var/lib/ceph/31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36
Dec 11 09:13:00 compute-0 sudo[76080]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 11 09:13:00 compute-0 sudo[76080]: pam_unix(sudo:session): session closed for user root
Dec 11 09:13:01 compute-0 ceph-mgr[74715]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Dec 11 09:13:01 compute-0 sshd-session[76105]: Accepted publickey for ceph-admin from 192.168.122.100 port 57132 ssh2: RSA SHA256:VeIx2NZka5hi0niQjHCLie+FE2InrWghbFhBMbpMPGo
Dec 11 09:13:01 compute-0 systemd-logind[792]: New session 33 of user ceph-admin.
Dec 11 09:13:01 compute-0 systemd[1]: Started Session 33 of User ceph-admin.
Dec 11 09:13:01 compute-0 sshd-session[76105]: pam_unix(sshd:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Dec 11 09:13:01 compute-0 sudo[76109]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --timeout 895 check-host --expect-hostname compute-0
Dec 11 09:13:01 compute-0 sudo[76109]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 11 09:13:01 compute-0 sudo[76109]: pam_unix(sudo:session): session closed for user root
Dec 11 09:13:01 compute-0 ceph-mon[74426]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/inventory}] v 0)
Dec 11 09:13:01 compute-0 ceph-mon[74426]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/1266408288' entity='mgr.compute-0.wwpcae' 
Dec 11 09:13:01 compute-0 ceph-mgr[74715]: [cephadm INFO root] Added host compute-0
Dec 11 09:13:01 compute-0 ceph-mgr[74715]: log_channel(cephadm) log [INF] : Added host compute-0
Dec 11 09:13:01 compute-0 ceph-mon[74426]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config dump", "format": "json"} v 0)
Dec 11 09:13:01 compute-0 ceph-mon[74426]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/1266408288' entity='mgr.compute-0.wwpcae' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
Dec 11 09:13:01 compute-0 happy_lederberg[75770]: Added host 'compute-0' with addr '192.168.122.100'
Dec 11 09:13:01 compute-0 systemd[1]: libpod-e0af0b24f29fdb4f98686a841da32f603444926527fc50ce1af6f8d13800ab37.scope: Deactivated successfully.
Dec 11 09:13:01 compute-0 podman[75749]: 2025-12-11 09:13:01.984496014 +0000 UTC m=+5.964233886 container died e0af0b24f29fdb4f98686a841da32f603444926527fc50ce1af6f8d13800ab37 (image=quay.io/ceph/ceph:v19, name=happy_lederberg, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS)
Dec 11 09:13:02 compute-0 sudo[76154]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 11 09:13:02 compute-0 sudo[76154]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 11 09:13:02 compute-0 sudo[76154]: pam_unix(sudo:session): session closed for user root
Dec 11 09:13:02 compute-0 sudo[76190]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph:v19 --timeout 895 pull
Dec 11 09:13:02 compute-0 sudo[76190]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 11 09:13:02 compute-0 systemd[1]: var-lib-containers-storage-overlay-046b63c1b75dfbb2c41cb9e28d69fe20de7928cc7ea91c103b087fab022a1e1c-merged.mount: Deactivated successfully.
Dec 11 09:13:02 compute-0 podman[75749]: 2025-12-11 09:13:02.239249485 +0000 UTC m=+6.218987337 container remove e0af0b24f29fdb4f98686a841da32f603444926527fc50ce1af6f8d13800ab37 (image=quay.io/ceph/ceph:v19, name=happy_lederberg, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Dec 11 09:13:02 compute-0 systemd[1]: libpod-conmon-e0af0b24f29fdb4f98686a841da32f603444926527fc50ce1af6f8d13800ab37.scope: Deactivated successfully.
Dec 11 09:13:02 compute-0 podman[76216]: 2025-12-11 09:13:02.289260119 +0000 UTC m=+0.026320928 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Dec 11 09:13:02 compute-0 podman[76216]: 2025-12-11 09:13:02.493758636 +0000 UTC m=+0.230819425 container create 5331f9d1d9b176b4ee0b4391dd378f36c11a3800683e3c0d9828977cef4410fd (image=quay.io/ceph/ceph:v19, name=intelligent_cannon, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS)
Dec 11 09:13:02 compute-0 systemd[1]: Started libpod-conmon-5331f9d1d9b176b4ee0b4391dd378f36c11a3800683e3c0d9828977cef4410fd.scope.
Dec 11 09:13:02 compute-0 systemd[1]: Started libcrun container.
Dec 11 09:13:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5b2acbf5ffcf39525feb962083cc7e6f4780ede9e6f956b822a4a9a3dbc9f0c7/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 11 09:13:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5b2acbf5ffcf39525feb962083cc7e6f4780ede9e6f956b822a4a9a3dbc9f0c7/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 11 09:13:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5b2acbf5ffcf39525feb962083cc7e6f4780ede9e6f956b822a4a9a3dbc9f0c7/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Dec 11 09:13:02 compute-0 podman[76216]: 2025-12-11 09:13:02.912267238 +0000 UTC m=+0.649328047 container init 5331f9d1d9b176b4ee0b4391dd378f36c11a3800683e3c0d9828977cef4410fd (image=quay.io/ceph/ceph:v19, name=intelligent_cannon, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, OSD_FLAVOR=default, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS)
Dec 11 09:13:02 compute-0 podman[76216]: 2025-12-11 09:13:02.924626468 +0000 UTC m=+0.661687257 container start 5331f9d1d9b176b4ee0b4391dd378f36c11a3800683e3c0d9828977cef4410fd (image=quay.io/ceph/ceph:v19, name=intelligent_cannon, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0)
Dec 11 09:13:02 compute-0 podman[76216]: 2025-12-11 09:13:02.953035646 +0000 UTC m=+0.690096445 container attach 5331f9d1d9b176b4ee0b4391dd378f36c11a3800683e3c0d9828977cef4410fd (image=quay.io/ceph/ceph:v19, name=intelligent_cannon, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 11 09:13:02 compute-0 ceph-mon[74426]: from='mgr.14124 192.168.122.100:0/1266408288' entity='mgr.compute-0.wwpcae' 
Dec 11 09:13:02 compute-0 ceph-mon[74426]: Added host compute-0
Dec 11 09:13:02 compute-0 ceph-mon[74426]: from='mgr.14124 192.168.122.100:0/1266408288' entity='mgr.compute-0.wwpcae' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
Dec 11 09:13:03 compute-0 ceph-mgr[74715]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Dec 11 09:13:03 compute-0 ceph-mgr[74715]: log_channel(audit) log [DBG] : from='client.14148 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "mon", "target": ["mon-mgr", ""]}]: dispatch
Dec 11 09:13:03 compute-0 ceph-mgr[74715]: [cephadm INFO root] Saving service mon spec with placement count:5
Dec 11 09:13:03 compute-0 ceph-mgr[74715]: log_channel(cephadm) log [INF] : Saving service mon spec with placement count:5
Dec 11 09:13:03 compute-0 ceph-mon[74426]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mon}] v 0)
Dec 11 09:13:03 compute-0 ceph-mon[74426]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/1266408288' entity='mgr.compute-0.wwpcae' 
Dec 11 09:13:03 compute-0 intelligent_cannon[76255]: Scheduled mon update...
Dec 11 09:13:03 compute-0 systemd[1]: libpod-5331f9d1d9b176b4ee0b4391dd378f36c11a3800683e3c0d9828977cef4410fd.scope: Deactivated successfully.
Dec 11 09:13:03 compute-0 podman[76216]: 2025-12-11 09:13:03.402802833 +0000 UTC m=+1.139863632 container died 5331f9d1d9b176b4ee0b4391dd378f36c11a3800683e3c0d9828977cef4410fd (image=quay.io/ceph/ceph:v19, name=intelligent_cannon, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default)
Dec 11 09:13:03 compute-0 systemd[1]: var-lib-containers-storage-overlay-5b2acbf5ffcf39525feb962083cc7e6f4780ede9e6f956b822a4a9a3dbc9f0c7-merged.mount: Deactivated successfully.
Dec 11 09:13:03 compute-0 podman[76241]: 2025-12-11 09:13:03.73760952 +0000 UTC m=+1.423084291 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Dec 11 09:13:03 compute-0 podman[76216]: 2025-12-11 09:13:03.748612365 +0000 UTC m=+1.485673154 container remove 5331f9d1d9b176b4ee0b4391dd378f36c11a3800683e3c0d9828977cef4410fd (image=quay.io/ceph/ceph:v19, name=intelligent_cannon, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, OSD_FLAVOR=default)
Dec 11 09:13:03 compute-0 systemd[1]: libpod-conmon-5331f9d1d9b176b4ee0b4391dd378f36c11a3800683e3c0d9828977cef4410fd.scope: Deactivated successfully.
Dec 11 09:13:03 compute-0 ceph-mon[74426]: mon.compute-0@0(leader).osd e2 _set_new_cache_sizes cache_size:1020054710 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 11 09:13:03 compute-0 podman[76297]: 2025-12-11 09:13:03.800724111 +0000 UTC m=+0.031142562 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Dec 11 09:13:03 compute-0 podman[76297]: 2025-12-11 09:13:03.908550325 +0000 UTC m=+0.138968756 container create e472bdb2001eadeb1bcf5eb7d32ff3fbf0ea0cca2a9bf4aa99a3f0393dbc2e07 (image=quay.io/ceph/ceph:v19, name=focused_herschel, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Dec 11 09:13:03 compute-0 systemd[1]: Started libpod-conmon-e472bdb2001eadeb1bcf5eb7d32ff3fbf0ea0cca2a9bf4aa99a3f0393dbc2e07.scope.
Dec 11 09:13:03 compute-0 podman[76321]: 2025-12-11 09:13:03.981650626 +0000 UTC m=+0.170355046 container create 6f9bb78cbc1a5311f2bac146fcbcb382816c5aa4bbb2ce1c2b38fa76bdea5e69 (image=quay.io/ceph/ceph:v19, name=elegant_hoover, OSD_FLAVOR=default, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1)
Dec 11 09:13:04 compute-0 systemd[1]: Started libcrun container.
Dec 11 09:13:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ca4f350cff8f18ec51cbc77fe77d91059f2c7e95f6efb00010ec2de87f550a3c/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Dec 11 09:13:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ca4f350cff8f18ec51cbc77fe77d91059f2c7e95f6efb00010ec2de87f550a3c/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 11 09:13:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ca4f350cff8f18ec51cbc77fe77d91059f2c7e95f6efb00010ec2de87f550a3c/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 11 09:13:04 compute-0 systemd[1]: Started libpod-conmon-6f9bb78cbc1a5311f2bac146fcbcb382816c5aa4bbb2ce1c2b38fa76bdea5e69.scope.
Dec 11 09:13:04 compute-0 systemd[1]: Started libcrun container.
Dec 11 09:13:04 compute-0 podman[76321]: 2025-12-11 09:13:03.964094347 +0000 UTC m=+0.152798797 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Dec 11 09:13:04 compute-0 podman[76297]: 2025-12-11 09:13:04.218536847 +0000 UTC m=+0.448955308 container init e472bdb2001eadeb1bcf5eb7d32ff3fbf0ea0cca2a9bf4aa99a3f0393dbc2e07 (image=quay.io/ceph/ceph:v19, name=focused_herschel, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 11 09:13:04 compute-0 podman[76297]: 2025-12-11 09:13:04.224912305 +0000 UTC m=+0.455330736 container start e472bdb2001eadeb1bcf5eb7d32ff3fbf0ea0cca2a9bf4aa99a3f0393dbc2e07 (image=quay.io/ceph/ceph:v19, name=focused_herschel, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 11 09:13:04 compute-0 podman[76297]: 2025-12-11 09:13:04.3307174 +0000 UTC m=+0.561135861 container attach e472bdb2001eadeb1bcf5eb7d32ff3fbf0ea0cca2a9bf4aa99a3f0393dbc2e07 (image=quay.io/ceph/ceph:v19, name=focused_herschel, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 11 09:13:04 compute-0 podman[76321]: 2025-12-11 09:13:04.370935671 +0000 UTC m=+0.559640101 container init 6f9bb78cbc1a5311f2bac146fcbcb382816c5aa4bbb2ce1c2b38fa76bdea5e69 (image=quay.io/ceph/ceph:v19, name=elegant_hoover, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, ceph=True)
Dec 11 09:13:04 compute-0 podman[76321]: 2025-12-11 09:13:04.376149708 +0000 UTC m=+0.564854118 container start 6f9bb78cbc1a5311f2bac146fcbcb382816c5aa4bbb2ce1c2b38fa76bdea5e69 (image=quay.io/ceph/ceph:v19, name=elegant_hoover, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 11 09:13:04 compute-0 podman[76321]: 2025-12-11 09:13:04.630632969 +0000 UTC m=+0.819337409 container attach 6f9bb78cbc1a5311f2bac146fcbcb382816c5aa4bbb2ce1c2b38fa76bdea5e69 (image=quay.io/ceph/ceph:v19, name=elegant_hoover, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325)
Dec 11 09:13:04 compute-0 ceph-mon[74426]: from='client.14148 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "mon", "target": ["mon-mgr", ""]}]: dispatch
Dec 11 09:13:04 compute-0 ceph-mon[74426]: Saving service mon spec with placement count:5
Dec 11 09:13:04 compute-0 ceph-mon[74426]: from='mgr.14124 192.168.122.100:0/1266408288' entity='mgr.compute-0.wwpcae' 
Dec 11 09:13:04 compute-0 ceph-mgr[74715]: log_channel(audit) log [DBG] : from='client.14150 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "mgr", "target": ["mon-mgr", ""]}]: dispatch
Dec 11 09:13:04 compute-0 ceph-mgr[74715]: [cephadm INFO root] Saving service mgr spec with placement count:2
Dec 11 09:13:04 compute-0 ceph-mgr[74715]: log_channel(cephadm) log [INF] : Saving service mgr spec with placement count:2
Dec 11 09:13:04 compute-0 ceph-mon[74426]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mgr}] v 0)
Dec 11 09:13:04 compute-0 elegant_hoover[76343]: ceph version 19.2.3 (c92aebb279828e9c3c1f5d24613efca272649e62) squid (stable)
Dec 11 09:13:04 compute-0 ceph-mon[74426]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/1266408288' entity='mgr.compute-0.wwpcae' 
Dec 11 09:13:04 compute-0 focused_herschel[76337]: Scheduled mgr update...
Dec 11 09:13:04 compute-0 systemd[1]: libpod-6f9bb78cbc1a5311f2bac146fcbcb382816c5aa4bbb2ce1c2b38fa76bdea5e69.scope: Deactivated successfully.
Dec 11 09:13:04 compute-0 podman[76321]: 2025-12-11 09:13:04.713803323 +0000 UTC m=+0.902507763 container died 6f9bb78cbc1a5311f2bac146fcbcb382816c5aa4bbb2ce1c2b38fa76bdea5e69 (image=quay.io/ceph/ceph:v19, name=elegant_hoover, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, OSD_FLAVOR=default, io.buildah.version=1.40.1, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 11 09:13:04 compute-0 systemd[1]: libpod-e472bdb2001eadeb1bcf5eb7d32ff3fbf0ea0cca2a9bf4aa99a3f0393dbc2e07.scope: Deactivated successfully.
Dec 11 09:13:04 compute-0 conmon[76337]: conmon e472bdb2001eadeb1bcf <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-e472bdb2001eadeb1bcf5eb7d32ff3fbf0ea0cca2a9bf4aa99a3f0393dbc2e07.scope/container/memory.events
Dec 11 09:13:04 compute-0 podman[76297]: 2025-12-11 09:13:04.860641847 +0000 UTC m=+1.091060278 container died e472bdb2001eadeb1bcf5eb7d32ff3fbf0ea0cca2a9bf4aa99a3f0393dbc2e07 (image=quay.io/ceph/ceph:v19, name=focused_herschel, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec 11 09:13:05 compute-0 ceph-mgr[74715]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Dec 11 09:13:05 compute-0 systemd[1]: var-lib-containers-storage-overlay-ca4f350cff8f18ec51cbc77fe77d91059f2c7e95f6efb00010ec2de87f550a3c-merged.mount: Deactivated successfully.
Dec 11 09:13:05 compute-0 podman[76297]: 2025-12-11 09:13:05.325097423 +0000 UTC m=+1.555515854 container remove e472bdb2001eadeb1bcf5eb7d32ff3fbf0ea0cca2a9bf4aa99a3f0393dbc2e07 (image=quay.io/ceph/ceph:v19, name=focused_herschel, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Dec 11 09:13:05 compute-0 systemd[1]: libpod-conmon-e472bdb2001eadeb1bcf5eb7d32ff3fbf0ea0cca2a9bf4aa99a3f0393dbc2e07.scope: Deactivated successfully.
Dec 11 09:13:05 compute-0 podman[76394]: 2025-12-11 09:13:05.402909514 +0000 UTC m=+0.058815165 container create a7dd2261729ead73259fe2d3e91d27ef33fe0958a4ef72e0d51c79b32a4b04b6 (image=quay.io/ceph/ceph:v19, name=lucid_kowalevski, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid)
Dec 11 09:13:05 compute-0 podman[76394]: 2025-12-11 09:13:05.366585896 +0000 UTC m=+0.022491577 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Dec 11 09:13:05 compute-0 systemd[1]: Started libpod-conmon-a7dd2261729ead73259fe2d3e91d27ef33fe0958a4ef72e0d51c79b32a4b04b6.scope.
Dec 11 09:13:05 compute-0 systemd[1]: Started libcrun container.
Dec 11 09:13:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ea7c9d9ae6d7e18553b9ef93c994cfc0b5b6484ca28d88fbeca61e982290ec95/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 11 09:13:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ea7c9d9ae6d7e18553b9ef93c994cfc0b5b6484ca28d88fbeca61e982290ec95/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Dec 11 09:13:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ea7c9d9ae6d7e18553b9ef93c994cfc0b5b6484ca28d88fbeca61e982290ec95/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 11 09:13:05 compute-0 podman[76394]: 2025-12-11 09:13:05.719431259 +0000 UTC m=+0.375336940 container init a7dd2261729ead73259fe2d3e91d27ef33fe0958a4ef72e0d51c79b32a4b04b6 (image=quay.io/ceph/ceph:v19, name=lucid_kowalevski, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec 11 09:13:05 compute-0 podman[76394]: 2025-12-11 09:13:05.725642331 +0000 UTC m=+0.381547982 container start a7dd2261729ead73259fe2d3e91d27ef33fe0958a4ef72e0d51c79b32a4b04b6 (image=quay.io/ceph/ceph:v19, name=lucid_kowalevski, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Dec 11 09:13:05 compute-0 ceph-mon[74426]: from='client.14150 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "mgr", "target": ["mon-mgr", ""]}]: dispatch
Dec 11 09:13:05 compute-0 ceph-mon[74426]: Saving service mgr spec with placement count:2
Dec 11 09:13:05 compute-0 ceph-mon[74426]: from='mgr.14124 192.168.122.100:0/1266408288' entity='mgr.compute-0.wwpcae' 
Dec 11 09:13:05 compute-0 podman[76394]: 2025-12-11 09:13:05.819690335 +0000 UTC m=+0.475596016 container attach a7dd2261729ead73259fe2d3e91d27ef33fe0958a4ef72e0d51c79b32a4b04b6 (image=quay.io/ceph/ceph:v19, name=lucid_kowalevski, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, CEPH_REF=squid, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec 11 09:13:05 compute-0 systemd[1]: var-lib-containers-storage-overlay-79c10364df90b68291dc98d31ed2b7abffb0b6a9b2fc4568668d8c24890bd5b2-merged.mount: Deactivated successfully.
Dec 11 09:13:05 compute-0 podman[76321]: 2025-12-11 09:13:05.963213336 +0000 UTC m=+2.151917746 container remove 6f9bb78cbc1a5311f2bac146fcbcb382816c5aa4bbb2ce1c2b38fa76bdea5e69 (image=quay.io/ceph/ceph:v19, name=elegant_hoover, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec 11 09:13:05 compute-0 systemd[1]: libpod-conmon-6f9bb78cbc1a5311f2bac146fcbcb382816c5aa4bbb2ce1c2b38fa76bdea5e69.scope: Deactivated successfully.
Dec 11 09:13:06 compute-0 sudo[76190]: pam_unix(sudo:session): session closed for user root
Dec 11 09:13:06 compute-0 ceph-mon[74426]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=container_image}] v 0)
Dec 11 09:13:06 compute-0 ceph-mon[74426]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/1266408288' entity='mgr.compute-0.wwpcae' 
Dec 11 09:13:06 compute-0 sudo[76435]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 11 09:13:06 compute-0 sudo[76435]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 11 09:13:06 compute-0 ceph-mgr[74715]: log_channel(audit) log [DBG] : from='client.14152 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "crash", "target": ["mon-mgr", ""]}]: dispatch
Dec 11 09:13:06 compute-0 sudo[76435]: pam_unix(sudo:session): session closed for user root
Dec 11 09:13:06 compute-0 ceph-mgr[74715]: [cephadm INFO root] Saving service crash spec with placement *
Dec 11 09:13:06 compute-0 ceph-mgr[74715]: log_channel(cephadm) log [INF] : Saving service crash spec with placement *
Dec 11 09:13:06 compute-0 ceph-mon[74426]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.crash}] v 0)
Dec 11 09:13:06 compute-0 ceph-mon[74426]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/1266408288' entity='mgr.compute-0.wwpcae' 
Dec 11 09:13:06 compute-0 lucid_kowalevski[76410]: Scheduled crash update...
Dec 11 09:13:06 compute-0 systemd[1]: libpod-a7dd2261729ead73259fe2d3e91d27ef33fe0958a4ef72e0d51c79b32a4b04b6.scope: Deactivated successfully.
Dec 11 09:13:06 compute-0 conmon[76410]: conmon a7dd2261729ead73259f <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-a7dd2261729ead73259fe2d3e91d27ef33fe0958a4ef72e0d51c79b32a4b04b6.scope/container/memory.events
Dec 11 09:13:06 compute-0 podman[76394]: 2025-12-11 09:13:06.155835789 +0000 UTC m=+0.811741440 container died a7dd2261729ead73259fe2d3e91d27ef33fe0958a4ef72e0d51c79b32a4b04b6 (image=quay.io/ceph/ceph:v19, name=lucid_kowalevski, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec 11 09:13:06 compute-0 systemd[1]: var-lib-containers-storage-overlay-ea7c9d9ae6d7e18553b9ef93c994cfc0b5b6484ca28d88fbeca61e982290ec95-merged.mount: Deactivated successfully.
Dec 11 09:13:06 compute-0 podman[76394]: 2025-12-11 09:13:06.190823742 +0000 UTC m=+0.846729383 container remove a7dd2261729ead73259fe2d3e91d27ef33fe0958a4ef72e0d51c79b32a4b04b6 (image=quay.io/ceph/ceph:v19, name=lucid_kowalevski, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250325)
Dec 11 09:13:06 compute-0 sudo[76461]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --timeout 895 check-host
Dec 11 09:13:06 compute-0 sudo[76461]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 11 09:13:06 compute-0 systemd[1]: libpod-conmon-a7dd2261729ead73259fe2d3e91d27ef33fe0958a4ef72e0d51c79b32a4b04b6.scope: Deactivated successfully.
Dec 11 09:13:06 compute-0 podman[76497]: 2025-12-11 09:13:06.260124843 +0000 UTC m=+0.047491199 container create 7ad9d98761028879d01d58df948038f66520b743cda1943960f224f0b7467b98 (image=quay.io/ceph/ceph:v19, name=determined_mcnulty, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec 11 09:13:06 compute-0 systemd[1]: Started libpod-conmon-7ad9d98761028879d01d58df948038f66520b743cda1943960f224f0b7467b98.scope.
Dec 11 09:13:06 compute-0 systemd[1]: Started libcrun container.
Dec 11 09:13:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3cda335a4ef2670cf3ef2e81e5bce5a4382b55c4e434d6520d85d1f25fa1b5bd/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 11 09:13:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3cda335a4ef2670cf3ef2e81e5bce5a4382b55c4e434d6520d85d1f25fa1b5bd/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Dec 11 09:13:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3cda335a4ef2670cf3ef2e81e5bce5a4382b55c4e434d6520d85d1f25fa1b5bd/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 11 09:13:06 compute-0 podman[76497]: 2025-12-11 09:13:06.241276131 +0000 UTC m=+0.028642487 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Dec 11 09:13:06 compute-0 podman[76497]: 2025-12-11 09:13:06.346686742 +0000 UTC m=+0.134053098 container init 7ad9d98761028879d01d58df948038f66520b743cda1943960f224f0b7467b98 (image=quay.io/ceph/ceph:v19, name=determined_mcnulty, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True)
Dec 11 09:13:06 compute-0 podman[76497]: 2025-12-11 09:13:06.353421062 +0000 UTC m=+0.140787398 container start 7ad9d98761028879d01d58df948038f66520b743cda1943960f224f0b7467b98 (image=quay.io/ceph/ceph:v19, name=determined_mcnulty, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 11 09:13:06 compute-0 podman[76497]: 2025-12-11 09:13:06.357629955 +0000 UTC m=+0.144996311 container attach 7ad9d98761028879d01d58df948038f66520b743cda1943960f224f0b7467b98 (image=quay.io/ceph/ceph:v19, name=determined_mcnulty, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Dec 11 09:13:06 compute-0 sudo[76461]: pam_unix(sudo:session): session closed for user root
Dec 11 09:13:06 compute-0 ceph-mon[74426]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec 11 09:13:06 compute-0 ceph-mon[74426]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/1266408288' entity='mgr.compute-0.wwpcae' 
Dec 11 09:13:06 compute-0 sudo[76557]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 11 09:13:06 compute-0 sudo[76557]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 11 09:13:06 compute-0 sudo[76557]: pam_unix(sudo:session): session closed for user root
Dec 11 09:13:06 compute-0 sudo[76582]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ls
Dec 11 09:13:06 compute-0 sudo[76582]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 11 09:13:06 compute-0 ceph-mon[74426]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=mgr/cephadm/container_init}] v 0)
Dec 11 09:13:06 compute-0 ceph-mon[74426]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3947927012' entity='client.admin' 
Dec 11 09:13:06 compute-0 systemd[1]: libpod-7ad9d98761028879d01d58df948038f66520b743cda1943960f224f0b7467b98.scope: Deactivated successfully.
Dec 11 09:13:06 compute-0 podman[76497]: 2025-12-11 09:13:06.742029733 +0000 UTC m=+0.529396069 container died 7ad9d98761028879d01d58df948038f66520b743cda1943960f224f0b7467b98 (image=quay.io/ceph/ceph:v19, name=determined_mcnulty, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Dec 11 09:13:06 compute-0 systemd[1]: var-lib-containers-storage-overlay-3cda335a4ef2670cf3ef2e81e5bce5a4382b55c4e434d6520d85d1f25fa1b5bd-merged.mount: Deactivated successfully.
Dec 11 09:13:06 compute-0 podman[76497]: 2025-12-11 09:13:06.784347655 +0000 UTC m=+0.571713991 container remove 7ad9d98761028879d01d58df948038f66520b743cda1943960f224f0b7467b98 (image=quay.io/ceph/ceph:v19, name=determined_mcnulty, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Dec 11 09:13:06 compute-0 systemd[1]: libpod-conmon-7ad9d98761028879d01d58df948038f66520b743cda1943960f224f0b7467b98.scope: Deactivated successfully.
Dec 11 09:13:06 compute-0 podman[76621]: 2025-12-11 09:13:06.847805557 +0000 UTC m=+0.042414296 container create 43fb6519bcc7e64a81fa4b804b06c4095cf1f140871be282ab24f75553ee1e3e (image=quay.io/ceph/ceph:v19, name=trusting_mayer, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, io.buildah.version=1.40.1, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 11 09:13:06 compute-0 systemd[1]: Started libpod-conmon-43fb6519bcc7e64a81fa4b804b06c4095cf1f140871be282ab24f75553ee1e3e.scope.
Dec 11 09:13:06 compute-0 systemd[1]: Started libcrun container.
Dec 11 09:13:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/11a490b6420314c88edfee7f16e4d014653f173acf0745ae77bffa5a422f379b/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Dec 11 09:13:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/11a490b6420314c88edfee7f16e4d014653f173acf0745ae77bffa5a422f379b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 11 09:13:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/11a490b6420314c88edfee7f16e4d014653f173acf0745ae77bffa5a422f379b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 11 09:13:06 compute-0 podman[76621]: 2025-12-11 09:13:06.915256586 +0000 UTC m=+0.109865345 container init 43fb6519bcc7e64a81fa4b804b06c4095cf1f140871be282ab24f75553ee1e3e (image=quay.io/ceph/ceph:v19, name=trusting_mayer, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.build-date=20250325)
Dec 11 09:13:06 compute-0 podman[76621]: 2025-12-11 09:13:06.921769878 +0000 UTC m=+0.116378617 container start 43fb6519bcc7e64a81fa4b804b06c4095cf1f140871be282ab24f75553ee1e3e (image=quay.io/ceph/ceph:v19, name=trusting_mayer, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Dec 11 09:13:06 compute-0 podman[76621]: 2025-12-11 09:13:06.827965401 +0000 UTC m=+0.022574160 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Dec 11 09:13:06 compute-0 podman[76621]: 2025-12-11 09:13:06.925736353 +0000 UTC m=+0.120345092 container attach 43fb6519bcc7e64a81fa4b804b06c4095cf1f140871be282ab24f75553ee1e3e (image=quay.io/ceph/ceph:v19, name=trusting_mayer, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, ceph=True)
Dec 11 09:13:07 compute-0 ceph-mgr[74715]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Dec 11 09:13:07 compute-0 ceph-mon[74426]: from='mgr.14124 192.168.122.100:0/1266408288' entity='mgr.compute-0.wwpcae' 
Dec 11 09:13:07 compute-0 ceph-mon[74426]: from='client.14152 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "crash", "target": ["mon-mgr", ""]}]: dispatch
Dec 11 09:13:07 compute-0 ceph-mon[74426]: Saving service crash spec with placement *
Dec 11 09:13:07 compute-0 ceph-mon[74426]: from='mgr.14124 192.168.122.100:0/1266408288' entity='mgr.compute-0.wwpcae' 
Dec 11 09:13:07 compute-0 ceph-mon[74426]: from='mgr.14124 192.168.122.100:0/1266408288' entity='mgr.compute-0.wwpcae' 
Dec 11 09:13:07 compute-0 ceph-mon[74426]: from='client.? 192.168.122.100:0/3947927012' entity='client.admin' 
Dec 11 09:13:07 compute-0 podman[76731]: 2025-12-11 09:13:07.244562306 +0000 UTC m=+0.066993653 container exec 9c08b5b1828392d2cc014af7ec9c415c9c915d4b6dd9798da9403e6929851bd4 (image=quay.io/ceph/ceph:v19, name=ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-mon-compute-0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 11 09:13:07 compute-0 ceph-mgr[74715]: log_channel(audit) log [DBG] : from='client.14156 -' entity='client.admin' cmd=[{"prefix": "orch client-keyring set", "entity": "client.admin", "placement": "label:_admin", "target": ["mon-mgr", ""]}]: dispatch
Dec 11 09:13:07 compute-0 ceph-mon[74426]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/client_keyrings}] v 0)
Dec 11 09:13:07 compute-0 ceph-mon[74426]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/1266408288' entity='mgr.compute-0.wwpcae' 
Dec 11 09:13:07 compute-0 podman[76621]: 2025-12-11 09:13:07.35824019 +0000 UTC m=+0.552848929 container died 43fb6519bcc7e64a81fa4b804b06c4095cf1f140871be282ab24f75553ee1e3e (image=quay.io/ceph/ceph:v19, name=trusting_mayer, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Dec 11 09:13:07 compute-0 systemd[1]: libpod-43fb6519bcc7e64a81fa4b804b06c4095cf1f140871be282ab24f75553ee1e3e.scope: Deactivated successfully.
Dec 11 09:13:07 compute-0 podman[76731]: 2025-12-11 09:13:07.3758438 +0000 UTC m=+0.198275197 container exec_died 9c08b5b1828392d2cc014af7ec9c415c9c915d4b6dd9798da9403e6929851bd4 (image=quay.io/ceph/ceph:v19, name=ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-mon-compute-0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 11 09:13:07 compute-0 systemd[1]: var-lib-containers-storage-overlay-11a490b6420314c88edfee7f16e4d014653f173acf0745ae77bffa5a422f379b-merged.mount: Deactivated successfully.
Dec 11 09:13:07 compute-0 podman[76621]: 2025-12-11 09:13:07.414023621 +0000 UTC m=+0.608632350 container remove 43fb6519bcc7e64a81fa4b804b06c4095cf1f140871be282ab24f75553ee1e3e (image=quay.io/ceph/ceph:v19, name=trusting_mayer, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325)
Dec 11 09:13:07 compute-0 systemd[1]: libpod-conmon-43fb6519bcc7e64a81fa4b804b06c4095cf1f140871be282ab24f75553ee1e3e.scope: Deactivated successfully.
Dec 11 09:13:07 compute-0 podman[76776]: 2025-12-11 09:13:07.479568084 +0000 UTC m=+0.042684605 container create 20aca395c5461302668078e58b589205746699a30ce2dd128fb84ab1ba1d8a44 (image=quay.io/ceph/ceph:v19, name=youthful_antonelli, CEPH_REF=squid, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Dec 11 09:13:07 compute-0 systemd[1]: Started libpod-conmon-20aca395c5461302668078e58b589205746699a30ce2dd128fb84ab1ba1d8a44.scope.
Dec 11 09:13:07 compute-0 sudo[76582]: pam_unix(sudo:session): session closed for user root
Dec 11 09:13:07 compute-0 ceph-mon[74426]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec 11 09:13:07 compute-0 ceph-mon[74426]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/1266408288' entity='mgr.compute-0.wwpcae' 
Dec 11 09:13:07 compute-0 systemd[1]: Started libcrun container.
Dec 11 09:13:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fb3ba741ecfb5a099616bdcd38c07be41593b2a840324d0ea81bf140f642cf33/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 11 09:13:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fb3ba741ecfb5a099616bdcd38c07be41593b2a840324d0ea81bf140f642cf33/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 11 09:13:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fb3ba741ecfb5a099616bdcd38c07be41593b2a840324d0ea81bf140f642cf33/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Dec 11 09:13:07 compute-0 podman[76776]: 2025-12-11 09:13:07.460055739 +0000 UTC m=+0.023172290 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Dec 11 09:13:07 compute-0 podman[76776]: 2025-12-11 09:13:07.55810546 +0000 UTC m=+0.121222011 container init 20aca395c5461302668078e58b589205746699a30ce2dd128fb84ab1ba1d8a44 (image=quay.io/ceph/ceph:v19, name=youthful_antonelli, ceph=True, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 11 09:13:07 compute-0 podman[76776]: 2025-12-11 09:13:07.565085488 +0000 UTC m=+0.128201999 container start 20aca395c5461302668078e58b589205746699a30ce2dd128fb84ab1ba1d8a44 (image=quay.io/ceph/ceph:v19, name=youthful_antonelli, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Dec 11 09:13:07 compute-0 podman[76776]: 2025-12-11 09:13:07.569684085 +0000 UTC m=+0.132800606 container attach 20aca395c5461302668078e58b589205746699a30ce2dd128fb84ab1ba1d8a44 (image=quay.io/ceph/ceph:v19, name=youthful_antonelli, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec 11 09:13:07 compute-0 sudo[76809]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 11 09:13:07 compute-0 sudo[76809]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 11 09:13:07 compute-0 sudo[76809]: pam_unix(sudo:session): session closed for user root
Dec 11 09:13:07 compute-0 sudo[76836]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --timeout 895 gather-facts
Dec 11 09:13:07 compute-0 sudo[76836]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 11 09:13:07 compute-0 systemd[1]: proc-sys-fs-binfmt_misc.automount: Got automount request for /proc/sys/fs/binfmt_misc, triggered by 76891 (sysctl)
Dec 11 09:13:07 compute-0 systemd[1]: Mounting Arbitrary Executable File Formats File System...
Dec 11 09:13:07 compute-0 ceph-mgr[74715]: log_channel(audit) log [DBG] : from='client.14158 -' entity='client.admin' cmd=[{"prefix": "orch host label add", "hostname": "compute-0", "label": "_admin", "target": ["mon-mgr", ""]}]: dispatch
Dec 11 09:13:07 compute-0 ceph-mon[74426]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/inventory}] v 0)
Dec 11 09:13:07 compute-0 systemd[1]: Mounted Arbitrary Executable File Formats File System.
Dec 11 09:13:07 compute-0 ceph-mon[74426]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/1266408288' entity='mgr.compute-0.wwpcae' 
Dec 11 09:13:07 compute-0 ceph-mgr[74715]: [cephadm INFO root] Added label _admin to host compute-0
Dec 11 09:13:07 compute-0 ceph-mgr[74715]: log_channel(cephadm) log [INF] : Added label _admin to host compute-0
Dec 11 09:13:07 compute-0 youthful_antonelli[76806]: Added label _admin to host compute-0
Dec 11 09:13:07 compute-0 systemd[1]: libpod-20aca395c5461302668078e58b589205746699a30ce2dd128fb84ab1ba1d8a44.scope: Deactivated successfully.
Dec 11 09:13:07 compute-0 podman[76776]: 2025-12-11 09:13:07.99153839 +0000 UTC m=+0.554654931 container died 20aca395c5461302668078e58b589205746699a30ce2dd128fb84ab1ba1d8a44 (image=quay.io/ceph/ceph:v19, name=youthful_antonelli, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Dec 11 09:13:08 compute-0 systemd[1]: var-lib-containers-storage-overlay-fb3ba741ecfb5a099616bdcd38c07be41593b2a840324d0ea81bf140f642cf33-merged.mount: Deactivated successfully.
Dec 11 09:13:08 compute-0 podman[76776]: 2025-12-11 09:13:08.035748386 +0000 UTC m=+0.598864907 container remove 20aca395c5461302668078e58b589205746699a30ce2dd128fb84ab1ba1d8a44 (image=quay.io/ceph/ceph:v19, name=youthful_antonelli, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 11 09:13:08 compute-0 systemd[1]: libpod-conmon-20aca395c5461302668078e58b589205746699a30ce2dd128fb84ab1ba1d8a44.scope: Deactivated successfully.
Dec 11 09:13:08 compute-0 podman[76911]: 2025-12-11 09:13:08.102223711 +0000 UTC m=+0.045274004 container create 61ffe93ccf576b5ef5928f13f41a82c7716c06a26635bea06db1b71ad790cb56 (image=quay.io/ceph/ceph:v19, name=flamboyant_austin, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 11 09:13:08 compute-0 systemd[1]: Started libpod-conmon-61ffe93ccf576b5ef5928f13f41a82c7716c06a26635bea06db1b71ad790cb56.scope.
Dec 11 09:13:08 compute-0 systemd[1]: Started libcrun container.
Dec 11 09:13:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/806eb066d8f350eecc8c953d32eecb5ae16f38c6bd6a7279173393fd3c033631/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 11 09:13:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/806eb066d8f350eecc8c953d32eecb5ae16f38c6bd6a7279173393fd3c033631/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Dec 11 09:13:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/806eb066d8f350eecc8c953d32eecb5ae16f38c6bd6a7279173393fd3c033631/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 11 09:13:08 compute-0 podman[76911]: 2025-12-11 09:13:08.082650494 +0000 UTC m=+0.025700817 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Dec 11 09:13:08 compute-0 podman[76911]: 2025-12-11 09:13:08.196022447 +0000 UTC m=+0.139072760 container init 61ffe93ccf576b5ef5928f13f41a82c7716c06a26635bea06db1b71ad790cb56 (image=quay.io/ceph/ceph:v19, name=flamboyant_austin, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 11 09:13:08 compute-0 podman[76911]: 2025-12-11 09:13:08.203006335 +0000 UTC m=+0.146056628 container start 61ffe93ccf576b5ef5928f13f41a82c7716c06a26635bea06db1b71ad790cb56 (image=quay.io/ceph/ceph:v19, name=flamboyant_austin, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 11 09:13:08 compute-0 podman[76911]: 2025-12-11 09:13:08.208102238 +0000 UTC m=+0.151152541 container attach 61ffe93ccf576b5ef5928f13f41a82c7716c06a26635bea06db1b71ad790cb56 (image=quay.io/ceph/ceph:v19, name=flamboyant_austin, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 11 09:13:08 compute-0 sudo[76836]: pam_unix(sudo:session): session closed for user root
Dec 11 09:13:08 compute-0 sudo[76949]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 11 09:13:08 compute-0 sudo[76949]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 11 09:13:08 compute-0 sudo[76949]: pam_unix(sudo:session): session closed for user root
Dec 11 09:13:08 compute-0 ceph-mon[74426]: from='client.14156 -' entity='client.admin' cmd=[{"prefix": "orch client-keyring set", "entity": "client.admin", "placement": "label:_admin", "target": ["mon-mgr", ""]}]: dispatch
Dec 11 09:13:08 compute-0 ceph-mon[74426]: from='mgr.14124 192.168.122.100:0/1266408288' entity='mgr.compute-0.wwpcae' 
Dec 11 09:13:08 compute-0 ceph-mon[74426]: from='mgr.14124 192.168.122.100:0/1266408288' entity='mgr.compute-0.wwpcae' 
Dec 11 09:13:08 compute-0 ceph-mon[74426]: from='mgr.14124 192.168.122.100:0/1266408288' entity='mgr.compute-0.wwpcae' 
Dec 11 09:13:08 compute-0 sudo[76993]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 list-networks
Dec 11 09:13:08 compute-0 sudo[76993]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 11 09:13:08 compute-0 ceph-mon[74426]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/dashboard/cluster/status}] v 0)
Dec 11 09:13:08 compute-0 ceph-mon[74426]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2858489282' entity='client.admin' 
Dec 11 09:13:08 compute-0 flamboyant_austin[76932]: set mgr/dashboard/cluster/status
Dec 11 09:13:08 compute-0 sudo[76993]: pam_unix(sudo:session): session closed for user root
Dec 11 09:13:08 compute-0 systemd[1]: libpod-61ffe93ccf576b5ef5928f13f41a82c7716c06a26635bea06db1b71ad790cb56.scope: Deactivated successfully.
Dec 11 09:13:08 compute-0 podman[76911]: 2025-12-11 09:13:08.68974182 +0000 UTC m=+0.632792113 container died 61ffe93ccf576b5ef5928f13f41a82c7716c06a26635bea06db1b71ad790cb56 (image=quay.io/ceph/ceph:v19, name=flamboyant_austin, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Dec 11 09:13:08 compute-0 ceph-mon[74426]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec 11 09:13:08 compute-0 ceph-mon[74426]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/1266408288' entity='mgr.compute-0.wwpcae' 
Dec 11 09:13:08 compute-0 systemd[1]: var-lib-containers-storage-overlay-806eb066d8f350eecc8c953d32eecb5ae16f38c6bd6a7279173393fd3c033631-merged.mount: Deactivated successfully.
Dec 11 09:13:08 compute-0 podman[76911]: 2025-12-11 09:13:08.743036186 +0000 UTC m=+0.686086479 container remove 61ffe93ccf576b5ef5928f13f41a82c7716c06a26635bea06db1b71ad790cb56 (image=quay.io/ceph/ceph:v19, name=flamboyant_austin, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Dec 11 09:13:08 compute-0 sudo[77046]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 11 09:13:08 compute-0 systemd[1]: libpod-conmon-61ffe93ccf576b5ef5928f13f41a82c7716c06a26635bea06db1b71ad790cb56.scope: Deactivated successfully.
Dec 11 09:13:08 compute-0 sudo[77046]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 11 09:13:08 compute-0 sudo[77046]: pam_unix(sudo:session): session closed for user root
Dec 11 09:13:08 compute-0 sudo[77074]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060 -- inventory --format=json-pretty --filter-for-batch
Dec 11 09:13:08 compute-0 sudo[77074]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 11 09:13:08 compute-0 sudo[73387]: pam_unix(sudo:session): session closed for user root
Dec 11 09:13:08 compute-0 ceph-mon[74426]: mon.compute-0@0(leader).osd e2 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 11 09:13:09 compute-0 ceph-mgr[74715]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Dec 11 09:13:09 compute-0 sudo[77164]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-emofkrytgbtszavrrvacjjexdcrwzdxz ; /usr/bin/python3'
Dec 11 09:13:09 compute-0 sudo[77164]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 11 09:13:09 compute-0 podman[77160]: 2025-12-11 09:13:09.218839368 +0000 UTC m=+0.038658118 container create b19c8ad3c21903b7ce3eceee7a6573632b7d7c15bdbca24f0f56c104df5bb23e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=flamboyant_payne, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 11 09:13:09 compute-0 systemd[1]: Started libpod-conmon-b19c8ad3c21903b7ce3eceee7a6573632b7d7c15bdbca24f0f56c104df5bb23e.scope.
Dec 11 09:13:09 compute-0 systemd[1]: Started libcrun container.
Dec 11 09:13:09 compute-0 podman[77160]: 2025-12-11 09:13:09.203710673 +0000 UTC m=+0.023529443 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec 11 09:13:09 compute-0 podman[77160]: 2025-12-11 09:13:09.299245468 +0000 UTC m=+0.119064228 container init b19c8ad3c21903b7ce3eceee7a6573632b7d7c15bdbca24f0f56c104df5bb23e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=flamboyant_payne, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=squid)
Dec 11 09:13:09 compute-0 podman[77160]: 2025-12-11 09:13:09.308250974 +0000 UTC m=+0.128069724 container start b19c8ad3c21903b7ce3eceee7a6573632b7d7c15bdbca24f0f56c104df5bb23e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=flamboyant_payne, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 11 09:13:09 compute-0 flamboyant_payne[77182]: 167 167
Dec 11 09:13:09 compute-0 podman[77160]: 2025-12-11 09:13:09.312302013 +0000 UTC m=+0.132120783 container attach b19c8ad3c21903b7ce3eceee7a6573632b7d7c15bdbca24f0f56c104df5bb23e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=flamboyant_payne, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0)
Dec 11 09:13:09 compute-0 systemd[1]: libpod-b19c8ad3c21903b7ce3eceee7a6573632b7d7c15bdbca24f0f56c104df5bb23e.scope: Deactivated successfully.
Dec 11 09:13:09 compute-0 podman[77160]: 2025-12-11 09:13:09.313754042 +0000 UTC m=+0.133572782 container died b19c8ad3c21903b7ce3eceee7a6573632b7d7c15bdbca24f0f56c104df5bb23e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=flamboyant_payne, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 11 09:13:09 compute-0 systemd[1]: var-lib-containers-storage-overlay-9247cb9d650b412b787eeb34daedeff4d12e974a744c0011a2e2f1b19ce5abfa-merged.mount: Deactivated successfully.
Dec 11 09:13:09 compute-0 ceph-mon[74426]: from='client.14158 -' entity='client.admin' cmd=[{"prefix": "orch host label add", "hostname": "compute-0", "label": "_admin", "target": ["mon-mgr", ""]}]: dispatch
Dec 11 09:13:09 compute-0 ceph-mon[74426]: Added label _admin to host compute-0
Dec 11 09:13:09 compute-0 ceph-mon[74426]: from='client.? 192.168.122.100:0/2858489282' entity='client.admin' 
Dec 11 09:13:09 compute-0 ceph-mon[74426]: from='mgr.14124 192.168.122.100:0/1266408288' entity='mgr.compute-0.wwpcae' 
Dec 11 09:13:09 compute-0 python3[77176]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   config set mgr mgr/cephadm/use_repo_digest false
                                           _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 11 09:13:09 compute-0 podman[77160]: 2025-12-11 09:13:09.355974461 +0000 UTC m=+0.175793211 container remove b19c8ad3c21903b7ce3eceee7a6573632b7d7c15bdbca24f0f56c104df5bb23e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=flamboyant_payne, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, ceph=True, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec 11 09:13:09 compute-0 systemd[1]: libpod-conmon-b19c8ad3c21903b7ce3eceee7a6573632b7d7c15bdbca24f0f56c104df5bb23e.scope: Deactivated successfully.
Dec 11 09:13:09 compute-0 podman[77199]: 2025-12-11 09:13:09.418682618 +0000 UTC m=+0.050866285 container create 41cb95cfd866a92fb5ec8ef5677cb30539c7b8ab8943e2966c5d4fef377cd019 (image=quay.io/ceph/ceph:v19, name=pensive_euclid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.build-date=20250325)
Dec 11 09:13:09 compute-0 systemd[1]: Started libpod-conmon-41cb95cfd866a92fb5ec8ef5677cb30539c7b8ab8943e2966c5d4fef377cd019.scope.
Dec 11 09:13:09 compute-0 systemd[1]: Started libcrun container.
Dec 11 09:13:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3a897d587b057a6ab819aa5b66c17536ec769d47eeaf6024d624843cbd757a66/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec 11 09:13:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3a897d587b057a6ab819aa5b66c17536ec769d47eeaf6024d624843cbd757a66/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 11 09:13:09 compute-0 podman[77199]: 2025-12-11 09:13:09.398456068 +0000 UTC m=+0.030639765 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Dec 11 09:13:09 compute-0 podman[77199]: 2025-12-11 09:13:09.4959546 +0000 UTC m=+0.128138317 container init 41cb95cfd866a92fb5ec8ef5677cb30539c7b8ab8943e2966c5d4fef377cd019 (image=quay.io/ceph/ceph:v19, name=pensive_euclid, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid)
Dec 11 09:13:09 compute-0 podman[77199]: 2025-12-11 09:13:09.503359703 +0000 UTC m=+0.135543380 container start 41cb95cfd866a92fb5ec8ef5677cb30539c7b8ab8943e2966c5d4fef377cd019 (image=quay.io/ceph/ceph:v19, name=pensive_euclid, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 11 09:13:09 compute-0 podman[77199]: 2025-12-11 09:13:09.50709629 +0000 UTC m=+0.139279957 container attach 41cb95cfd866a92fb5ec8ef5677cb30539c7b8ab8943e2966c5d4fef377cd019 (image=quay.io/ceph/ceph:v19, name=pensive_euclid, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Dec 11 09:13:09 compute-0 podman[77224]: 2025-12-11 09:13:09.5176874 +0000 UTC m=+0.046873338 container create db169b28d83e1d602c42af3423cf0ed06c16a30dad0b0a5c4814508c6e53b2d4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=inspiring_jepsen, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True)
Dec 11 09:13:09 compute-0 systemd[1]: Started libpod-conmon-db169b28d83e1d602c42af3423cf0ed06c16a30dad0b0a5c4814508c6e53b2d4.scope.
Dec 11 09:13:09 compute-0 systemd[1]: Started libcrun container.
Dec 11 09:13:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b516eee7c33c5952f648e27271b0fe09a91afee84db697c06160fcf0e2c1be3d/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 11 09:13:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b516eee7c33c5952f648e27271b0fe09a91afee84db697c06160fcf0e2c1be3d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 11 09:13:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b516eee7c33c5952f648e27271b0fe09a91afee84db697c06160fcf0e2c1be3d/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 11 09:13:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b516eee7c33c5952f648e27271b0fe09a91afee84db697c06160fcf0e2c1be3d/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 11 09:13:09 compute-0 podman[77224]: 2025-12-11 09:13:09.497676759 +0000 UTC m=+0.026862707 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec 11 09:13:09 compute-0 podman[77224]: 2025-12-11 09:13:09.610770272 +0000 UTC m=+0.139956200 container init db169b28d83e1d602c42af3423cf0ed06c16a30dad0b0a5c4814508c6e53b2d4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=inspiring_jepsen, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.build-date=20250325, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Dec 11 09:13:09 compute-0 podman[77224]: 2025-12-11 09:13:09.619124207 +0000 UTC m=+0.148310125 container start db169b28d83e1d602c42af3423cf0ed06c16a30dad0b0a5c4814508c6e53b2d4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=inspiring_jepsen, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1)
Dec 11 09:13:09 compute-0 podman[77224]: 2025-12-11 09:13:09.623534128 +0000 UTC m=+0.152720066 container attach db169b28d83e1d602c42af3423cf0ed06c16a30dad0b0a5c4814508c6e53b2d4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=inspiring_jepsen, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2)
Dec 11 09:13:09 compute-0 ceph-mon[74426]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=mgr/cephadm/use_repo_digest}] v 0)
Dec 11 09:13:09 compute-0 ceph-mon[74426]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1223531895' entity='client.admin' 
Dec 11 09:13:09 compute-0 systemd[1]: libpod-41cb95cfd866a92fb5ec8ef5677cb30539c7b8ab8943e2966c5d4fef377cd019.scope: Deactivated successfully.
Dec 11 09:13:09 compute-0 podman[77199]: 2025-12-11 09:13:09.908792387 +0000 UTC m=+0.540976064 container died 41cb95cfd866a92fb5ec8ef5677cb30539c7b8ab8943e2966c5d4fef377cd019 (image=quay.io/ceph/ceph:v19, name=pensive_euclid, org.label-schema.build-date=20250325, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0)
Dec 11 09:13:09 compute-0 systemd[1]: var-lib-containers-storage-overlay-3a897d587b057a6ab819aa5b66c17536ec769d47eeaf6024d624843cbd757a66-merged.mount: Deactivated successfully.
Dec 11 09:13:09 compute-0 podman[77199]: 2025-12-11 09:13:09.94790412 +0000 UTC m=+0.580087797 container remove 41cb95cfd866a92fb5ec8ef5677cb30539c7b8ab8943e2966c5d4fef377cd019 (image=quay.io/ceph/ceph:v19, name=pensive_euclid, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Dec 11 09:13:09 compute-0 systemd[1]: libpod-conmon-41cb95cfd866a92fb5ec8ef5677cb30539c7b8ab8943e2966c5d4fef377cd019.scope: Deactivated successfully.
Dec 11 09:13:09 compute-0 sudo[77164]: pam_unix(sudo:session): session closed for user root
Dec 11 09:13:10 compute-0 inspiring_jepsen[77241]: [
Dec 11 09:13:10 compute-0 inspiring_jepsen[77241]:     {
Dec 11 09:13:10 compute-0 inspiring_jepsen[77241]:         "available": false,
Dec 11 09:13:10 compute-0 inspiring_jepsen[77241]:         "being_replaced": false,
Dec 11 09:13:10 compute-0 inspiring_jepsen[77241]:         "ceph_device_lvm": false,
Dec 11 09:13:10 compute-0 inspiring_jepsen[77241]:         "device_id": "QEMU_DVD-ROM_QM00001",
Dec 11 09:13:10 compute-0 inspiring_jepsen[77241]:         "lsm_data": {},
Dec 11 09:13:10 compute-0 inspiring_jepsen[77241]:         "lvs": [],
Dec 11 09:13:10 compute-0 inspiring_jepsen[77241]:         "path": "/dev/sr0",
Dec 11 09:13:10 compute-0 inspiring_jepsen[77241]:         "rejected_reasons": [
Dec 11 09:13:10 compute-0 inspiring_jepsen[77241]:             "Insufficient space (<5GB)",
Dec 11 09:13:10 compute-0 inspiring_jepsen[77241]:             "Has a FileSystem"
Dec 11 09:13:10 compute-0 inspiring_jepsen[77241]:         ],
Dec 11 09:13:10 compute-0 inspiring_jepsen[77241]:         "sys_api": {
Dec 11 09:13:10 compute-0 inspiring_jepsen[77241]:             "actuators": null,
Dec 11 09:13:10 compute-0 inspiring_jepsen[77241]:             "device_nodes": [
Dec 11 09:13:10 compute-0 inspiring_jepsen[77241]:                 "sr0"
Dec 11 09:13:10 compute-0 inspiring_jepsen[77241]:             ],
Dec 11 09:13:10 compute-0 inspiring_jepsen[77241]:             "devname": "sr0",
Dec 11 09:13:10 compute-0 inspiring_jepsen[77241]:             "human_readable_size": "482.00 KB",
Dec 11 09:13:10 compute-0 inspiring_jepsen[77241]:             "id_bus": "ata",
Dec 11 09:13:10 compute-0 inspiring_jepsen[77241]:             "model": "QEMU DVD-ROM",
Dec 11 09:13:10 compute-0 inspiring_jepsen[77241]:             "nr_requests": "2",
Dec 11 09:13:10 compute-0 inspiring_jepsen[77241]:             "parent": "/dev/sr0",
Dec 11 09:13:10 compute-0 inspiring_jepsen[77241]:             "partitions": {},
Dec 11 09:13:10 compute-0 inspiring_jepsen[77241]:             "path": "/dev/sr0",
Dec 11 09:13:10 compute-0 inspiring_jepsen[77241]:             "removable": "1",
Dec 11 09:13:10 compute-0 inspiring_jepsen[77241]:             "rev": "2.5+",
Dec 11 09:13:10 compute-0 inspiring_jepsen[77241]:             "ro": "0",
Dec 11 09:13:10 compute-0 inspiring_jepsen[77241]:             "rotational": "1",
Dec 11 09:13:10 compute-0 inspiring_jepsen[77241]:             "sas_address": "",
Dec 11 09:13:10 compute-0 inspiring_jepsen[77241]:             "sas_device_handle": "",
Dec 11 09:13:10 compute-0 inspiring_jepsen[77241]:             "scheduler_mode": "mq-deadline",
Dec 11 09:13:10 compute-0 inspiring_jepsen[77241]:             "sectors": 0,
Dec 11 09:13:10 compute-0 inspiring_jepsen[77241]:             "sectorsize": "2048",
Dec 11 09:13:10 compute-0 inspiring_jepsen[77241]:             "size": 493568.0,
Dec 11 09:13:10 compute-0 inspiring_jepsen[77241]:             "support_discard": "2048",
Dec 11 09:13:10 compute-0 inspiring_jepsen[77241]:             "type": "disk",
Dec 11 09:13:10 compute-0 inspiring_jepsen[77241]:             "vendor": "QEMU"
Dec 11 09:13:10 compute-0 inspiring_jepsen[77241]:         }
Dec 11 09:13:10 compute-0 inspiring_jepsen[77241]:     }
Dec 11 09:13:10 compute-0 inspiring_jepsen[77241]: ]
Dec 11 09:13:10 compute-0 systemd[1]: libpod-db169b28d83e1d602c42af3423cf0ed06c16a30dad0b0a5c4814508c6e53b2d4.scope: Deactivated successfully.
Dec 11 09:13:10 compute-0 podman[77224]: 2025-12-11 09:13:10.392975785 +0000 UTC m=+0.922161703 container died db169b28d83e1d602c42af3423cf0ed06c16a30dad0b0a5c4814508c6e53b2d4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=inspiring_jepsen, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS)
Dec 11 09:13:10 compute-0 systemd[1]: var-lib-containers-storage-overlay-b516eee7c33c5952f648e27271b0fe09a91afee84db697c06160fcf0e2c1be3d-merged.mount: Deactivated successfully.
Dec 11 09:13:10 compute-0 podman[77224]: 2025-12-11 09:13:10.505887872 +0000 UTC m=+1.035073790 container remove db169b28d83e1d602c42af3423cf0ed06c16a30dad0b0a5c4814508c6e53b2d4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=inspiring_jepsen, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, CEPH_REF=squid, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1)
Dec 11 09:13:10 compute-0 systemd[1]: libpod-conmon-db169b28d83e1d602c42af3423cf0ed06c16a30dad0b0a5c4814508c6e53b2d4.scope: Deactivated successfully.
Dec 11 09:13:10 compute-0 sudo[77074]: pam_unix(sudo:session): session closed for user root
Dec 11 09:13:10 compute-0 ceph-mon[74426]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec 11 09:13:10 compute-0 ceph-mon[74426]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/1266408288' entity='mgr.compute-0.wwpcae' 
Dec 11 09:13:10 compute-0 ceph-mon[74426]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec 11 09:13:10 compute-0 ceph-mon[74426]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/1266408288' entity='mgr.compute-0.wwpcae' 
Dec 11 09:13:10 compute-0 ceph-mon[74426]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec 11 09:13:10 compute-0 ceph-mon[74426]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/1266408288' entity='mgr.compute-0.wwpcae' 
Dec 11 09:13:10 compute-0 ceph-mon[74426]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec 11 09:13:10 compute-0 ceph-mon[74426]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/1266408288' entity='mgr.compute-0.wwpcae' 
Dec 11 09:13:10 compute-0 ceph-mon[74426]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} v 0)
Dec 11 09:13:10 compute-0 ceph-mon[74426]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/1266408288' entity='mgr.compute-0.wwpcae' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Dec 11 09:13:10 compute-0 ceph-mon[74426]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 11 09:13:10 compute-0 ceph-mon[74426]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/1266408288' entity='mgr.compute-0.wwpcae' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 11 09:13:10 compute-0 ceph-mon[74426]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Dec 11 09:13:10 compute-0 ceph-mon[74426]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/1266408288' entity='mgr.compute-0.wwpcae' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec 11 09:13:10 compute-0 ceph-mgr[74715]: [cephadm INFO cephadm.serve] Updating compute-0:/etc/ceph/ceph.conf
Dec 11 09:13:10 compute-0 ceph-mgr[74715]: log_channel(cephadm) log [INF] : Updating compute-0:/etc/ceph/ceph.conf
Dec 11 09:13:10 compute-0 sudo[78423]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /etc/ceph
Dec 11 09:13:10 compute-0 sudo[78423]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 11 09:13:10 compute-0 sudo[78423]: pam_unix(sudo:session): session closed for user root
Dec 11 09:13:10 compute-0 sudo[78448]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /tmp/cephadm-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060/etc/ceph
Dec 11 09:13:10 compute-0 sudo[78448]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 11 09:13:10 compute-0 sudo[78448]: pam_unix(sudo:session): session closed for user root
Dec 11 09:13:10 compute-0 sudo[78474]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/touch /tmp/cephadm-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060/etc/ceph/ceph.conf.new
Dec 11 09:13:10 compute-0 sudo[78474]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 11 09:13:10 compute-0 sudo[78474]: pam_unix(sudo:session): session closed for user root
Dec 11 09:13:10 compute-0 sudo[78523]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R ceph-admin /tmp/cephadm-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060
Dec 11 09:13:10 compute-0 sudo[78523]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 11 09:13:10 compute-0 sudo[78523]: pam_unix(sudo:session): session closed for user root
Dec 11 09:13:10 compute-0 ceph-mon[74426]: from='client.? 192.168.122.100:0/1223531895' entity='client.admin' 
Dec 11 09:13:10 compute-0 ceph-mon[74426]: from='mgr.14124 192.168.122.100:0/1266408288' entity='mgr.compute-0.wwpcae' 
Dec 11 09:13:10 compute-0 ceph-mon[74426]: from='mgr.14124 192.168.122.100:0/1266408288' entity='mgr.compute-0.wwpcae' 
Dec 11 09:13:10 compute-0 ceph-mon[74426]: from='mgr.14124 192.168.122.100:0/1266408288' entity='mgr.compute-0.wwpcae' 
Dec 11 09:13:10 compute-0 ceph-mon[74426]: from='mgr.14124 192.168.122.100:0/1266408288' entity='mgr.compute-0.wwpcae' 
Dec 11 09:13:10 compute-0 ceph-mon[74426]: from='mgr.14124 192.168.122.100:0/1266408288' entity='mgr.compute-0.wwpcae' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Dec 11 09:13:10 compute-0 ceph-mon[74426]: from='mgr.14124 192.168.122.100:0/1266408288' entity='mgr.compute-0.wwpcae' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 11 09:13:10 compute-0 ceph-mon[74426]: from='mgr.14124 192.168.122.100:0/1266408288' entity='mgr.compute-0.wwpcae' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec 11 09:13:10 compute-0 sudo[78571]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060/etc/ceph/ceph.conf.new
Dec 11 09:13:10 compute-0 sudo[78571]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 11 09:13:10 compute-0 sudo[78571]: pam_unix(sudo:session): session closed for user root
Dec 11 09:13:10 compute-0 sudo[78618]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fqbkcykvphyjvmvdxbtqbuclidzkfkjv ; ANSIBLE_ASYNC_DIR=\'~/.ansible_async\' /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1765444390.3149214-37005-208979722111630/async_wrapper.py j935090594840 30 /home/zuul/.ansible/tmp/ansible-tmp-1765444390.3149214-37005-208979722111630/AnsiballZ_command.py _'
Dec 11 09:13:10 compute-0 sudo[78618]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 11 09:13:11 compute-0 sudo[78646]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R 0:0 /tmp/cephadm-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060/etc/ceph/ceph.conf.new
Dec 11 09:13:11 compute-0 sudo[78646]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 11 09:13:11 compute-0 sudo[78646]: pam_unix(sudo:session): session closed for user root
Dec 11 09:13:11 compute-0 ceph-mgr[74715]: mgr.server send_report Giving up on OSDs that haven't reported yet, sending potentially incomplete PG state to mon
Dec 11 09:13:11 compute-0 ceph-mgr[74715]: log_channel(cluster) log [DBG] : pgmap v3: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec 11 09:13:11 compute-0 ceph-mon[74426]: log_channel(cluster) log [WRN] : Health check failed: OSD count 0 < osd_pool_default_size 1 (TOO_FEW_OSDS)
Dec 11 09:13:11 compute-0 ansible-async_wrapper.py[78622]: Invoked with j935090594840 30 /home/zuul/.ansible/tmp/ansible-tmp-1765444390.3149214-37005-208979722111630/AnsiballZ_command.py _
Dec 11 09:13:11 compute-0 ansible-async_wrapper.py[78693]: Starting module and watcher
Dec 11 09:13:11 compute-0 ansible-async_wrapper.py[78693]: Start watching 78695 (30)
Dec 11 09:13:11 compute-0 ansible-async_wrapper.py[78695]: Start module (78695)
Dec 11 09:13:11 compute-0 ansible-async_wrapper.py[78622]: Return async_wrapper task started.
Dec 11 09:13:11 compute-0 sudo[78671]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060/etc/ceph/ceph.conf.new
Dec 11 09:13:11 compute-0 sudo[78671]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 11 09:13:11 compute-0 sudo[78618]: pam_unix(sudo:session): session closed for user root
Dec 11 09:13:11 compute-0 sudo[78671]: pam_unix(sudo:session): session closed for user root
Dec 11 09:13:11 compute-0 sudo[78701]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mv /tmp/cephadm-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060/etc/ceph/ceph.conf.new /etc/ceph/ceph.conf
Dec 11 09:13:11 compute-0 sudo[78701]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 11 09:13:11 compute-0 sudo[78701]: pam_unix(sudo:session): session closed for user root
Dec 11 09:13:11 compute-0 ceph-mgr[74715]: [cephadm INFO cephadm.serve] Updating compute-0:/var/lib/ceph/31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060/config/ceph.conf
Dec 11 09:13:11 compute-0 ceph-mgr[74715]: log_channel(cephadm) log [INF] : Updating compute-0:/var/lib/ceph/31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060/config/ceph.conf
Dec 11 09:13:11 compute-0 sudo[78726]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /var/lib/ceph/31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060/config
Dec 11 09:13:11 compute-0 sudo[78726]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 11 09:13:11 compute-0 sudo[78726]: pam_unix(sudo:session): session closed for user root
Dec 11 09:13:11 compute-0 python3[78698]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch status --format json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 11 09:13:11 compute-0 sudo[78751]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /tmp/cephadm-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060/var/lib/ceph/31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060/config
Dec 11 09:13:11 compute-0 sudo[78751]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 11 09:13:11 compute-0 sudo[78751]: pam_unix(sudo:session): session closed for user root
Dec 11 09:13:11 compute-0 podman[78752]: 2025-12-11 09:13:11.271416837 +0000 UTC m=+0.040913314 container create 49834120cd34f9e5022777c66494fb6440cc2afe0af65c2adf9cdf47c76513c8 (image=quay.io/ceph/ceph:v19, name=hopeful_buck, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 11 09:13:11 compute-0 systemd[1]: Started libpod-conmon-49834120cd34f9e5022777c66494fb6440cc2afe0af65c2adf9cdf47c76513c8.scope.
Dec 11 09:13:11 compute-0 systemd[1]: Started libcrun container.
Dec 11 09:13:11 compute-0 sudo[78790]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/touch /tmp/cephadm-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060/var/lib/ceph/31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060/config/ceph.conf.new
Dec 11 09:13:11 compute-0 sudo[78790]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 11 09:13:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c9109d7022f03282c07f723cceb97dc20eedcf83f9b6d40a2f74f41754456062/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 11 09:13:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c9109d7022f03282c07f723cceb97dc20eedcf83f9b6d40a2f74f41754456062/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec 11 09:13:11 compute-0 podman[78752]: 2025-12-11 09:13:11.253880999 +0000 UTC m=+0.023377496 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Dec 11 09:13:11 compute-0 sudo[78790]: pam_unix(sudo:session): session closed for user root
Dec 11 09:13:11 compute-0 podman[78752]: 2025-12-11 09:13:11.362288644 +0000 UTC m=+0.131785151 container init 49834120cd34f9e5022777c66494fb6440cc2afe0af65c2adf9cdf47c76513c8 (image=quay.io/ceph/ceph:v19, name=hopeful_buck, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 11 09:13:11 compute-0 podman[78752]: 2025-12-11 09:13:11.368111993 +0000 UTC m=+0.137608460 container start 49834120cd34f9e5022777c66494fb6440cc2afe0af65c2adf9cdf47c76513c8 (image=quay.io/ceph/ceph:v19, name=hopeful_buck, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 11 09:13:11 compute-0 podman[78752]: 2025-12-11 09:13:11.370883897 +0000 UTC m=+0.140380374 container attach 49834120cd34f9e5022777c66494fb6440cc2afe0af65c2adf9cdf47c76513c8 (image=quay.io/ceph/ceph:v19, name=hopeful_buck, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 11 09:13:11 compute-0 sudo[78820]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R ceph-admin /tmp/cephadm-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060
Dec 11 09:13:11 compute-0 sudo[78820]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 11 09:13:11 compute-0 sudo[78820]: pam_unix(sudo:session): session closed for user root
Dec 11 09:13:11 compute-0 sudo[78846]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060/var/lib/ceph/31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060/config/ceph.conf.new
Dec 11 09:13:11 compute-0 sudo[78846]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 11 09:13:11 compute-0 sudo[78846]: pam_unix(sudo:session): session closed for user root
Dec 11 09:13:11 compute-0 sudo[78913]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R 0:0 /tmp/cephadm-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060/var/lib/ceph/31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060/config/ceph.conf.new
Dec 11 09:13:11 compute-0 sudo[78913]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 11 09:13:11 compute-0 sudo[78913]: pam_unix(sudo:session): session closed for user root
Dec 11 09:13:11 compute-0 sudo[78938]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060/var/lib/ceph/31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060/config/ceph.conf.new
Dec 11 09:13:11 compute-0 sudo[78938]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 11 09:13:11 compute-0 sudo[78938]: pam_unix(sudo:session): session closed for user root
Dec 11 09:13:11 compute-0 sudo[78963]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mv /tmp/cephadm-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060/var/lib/ceph/31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060/config/ceph.conf.new /var/lib/ceph/31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060/config/ceph.conf
Dec 11 09:13:11 compute-0 sudo[78963]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 11 09:13:11 compute-0 sudo[78963]: pam_unix(sudo:session): session closed for user root
Dec 11 09:13:11 compute-0 ceph-mgr[74715]: [cephadm INFO cephadm.serve] Updating compute-0:/etc/ceph/ceph.client.admin.keyring
Dec 11 09:13:11 compute-0 ceph-mgr[74715]: log_channel(cephadm) log [INF] : Updating compute-0:/etc/ceph/ceph.client.admin.keyring
Dec 11 09:13:11 compute-0 ceph-mgr[74715]: log_channel(audit) log [DBG] : from='client.14164 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Dec 11 09:13:11 compute-0 hopeful_buck[78815]: 
Dec 11 09:13:11 compute-0 hopeful_buck[78815]: {"available": true, "backend": "cephadm", "paused": false, "workers": 10}
Dec 11 09:13:11 compute-0 sudo[78988]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /etc/ceph
Dec 11 09:13:11 compute-0 sudo[78988]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 11 09:13:11 compute-0 sudo[78988]: pam_unix(sudo:session): session closed for user root
Dec 11 09:13:11 compute-0 systemd[1]: libpod-49834120cd34f9e5022777c66494fb6440cc2afe0af65c2adf9cdf47c76513c8.scope: Deactivated successfully.
Dec 11 09:13:11 compute-0 podman[78752]: 2025-12-11 09:13:11.890024976 +0000 UTC m=+0.659521453 container died 49834120cd34f9e5022777c66494fb6440cc2afe0af65c2adf9cdf47c76513c8 (image=quay.io/ceph/ceph:v19, name=hopeful_buck, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250325, CEPH_REF=squid, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec 11 09:13:11 compute-0 ceph-mon[74426]: Updating compute-0:/etc/ceph/ceph.conf
Dec 11 09:13:11 compute-0 ceph-mon[74426]: pgmap v3: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec 11 09:13:11 compute-0 ceph-mon[74426]: Health check failed: OSD count 0 < osd_pool_default_size 1 (TOO_FEW_OSDS)
Dec 11 09:13:11 compute-0 ceph-mon[74426]: Updating compute-0:/var/lib/ceph/31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060/config/ceph.conf
Dec 11 09:13:11 compute-0 systemd[1]: var-lib-containers-storage-overlay-c9109d7022f03282c07f723cceb97dc20eedcf83f9b6d40a2f74f41754456062-merged.mount: Deactivated successfully.
Dec 11 09:13:11 compute-0 podman[78752]: 2025-12-11 09:13:11.931358414 +0000 UTC m=+0.700854891 container remove 49834120cd34f9e5022777c66494fb6440cc2afe0af65c2adf9cdf47c76513c8 (image=quay.io/ceph/ceph:v19, name=hopeful_buck, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 11 09:13:11 compute-0 systemd[1]: libpod-conmon-49834120cd34f9e5022777c66494fb6440cc2afe0af65c2adf9cdf47c76513c8.scope: Deactivated successfully.
Dec 11 09:13:11 compute-0 sudo[79015]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /tmp/cephadm-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060/etc/ceph
Dec 11 09:13:11 compute-0 sudo[79015]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 11 09:13:11 compute-0 sudo[79015]: pam_unix(sudo:session): session closed for user root
Dec 11 09:13:11 compute-0 ansible-async_wrapper.py[78695]: Module complete (78695)
Dec 11 09:13:12 compute-0 sudo[79052]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/touch /tmp/cephadm-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060/etc/ceph/ceph.client.admin.keyring.new
Dec 11 09:13:12 compute-0 sudo[79052]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 11 09:13:12 compute-0 sudo[79052]: pam_unix(sudo:session): session closed for user root
Dec 11 09:13:12 compute-0 sudo[79077]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R ceph-admin /tmp/cephadm-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060
Dec 11 09:13:12 compute-0 sudo[79077]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 11 09:13:12 compute-0 sudo[79077]: pam_unix(sudo:session): session closed for user root
Dec 11 09:13:12 compute-0 sudo[79102]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060/etc/ceph/ceph.client.admin.keyring.new
Dec 11 09:13:12 compute-0 sudo[79102]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 11 09:13:12 compute-0 sudo[79102]: pam_unix(sudo:session): session closed for user root
Dec 11 09:13:12 compute-0 sudo[79173]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R 0:0 /tmp/cephadm-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060/etc/ceph/ceph.client.admin.keyring.new
Dec 11 09:13:12 compute-0 sudo[79173]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 11 09:13:12 compute-0 sudo[79173]: pam_unix(sudo:session): session closed for user root
Dec 11 09:13:12 compute-0 sudo[79198]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 600 /tmp/cephadm-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060/etc/ceph/ceph.client.admin.keyring.new
Dec 11 09:13:12 compute-0 sudo[79198]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 11 09:13:12 compute-0 sudo[79198]: pam_unix(sudo:session): session closed for user root
Dec 11 09:13:12 compute-0 sudo[79266]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vunjmfnmyrefkucrhogiiadpneozyqhl ; /usr/bin/python3'
Dec 11 09:13:12 compute-0 sudo[79266]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 11 09:13:12 compute-0 sudo[79228]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mv /tmp/cephadm-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060/etc/ceph/ceph.client.admin.keyring.new /etc/ceph/ceph.client.admin.keyring
Dec 11 09:13:12 compute-0 sudo[79228]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 11 09:13:12 compute-0 sudo[79228]: pam_unix(sudo:session): session closed for user root
Dec 11 09:13:12 compute-0 ceph-mgr[74715]: [cephadm INFO cephadm.serve] Updating compute-0:/var/lib/ceph/31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060/config/ceph.client.admin.keyring
Dec 11 09:13:12 compute-0 ceph-mgr[74715]: log_channel(cephadm) log [INF] : Updating compute-0:/var/lib/ceph/31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060/config/ceph.client.admin.keyring
Dec 11 09:13:12 compute-0 sudo[79274]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /var/lib/ceph/31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060/config
Dec 11 09:13:12 compute-0 sudo[79274]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 11 09:13:12 compute-0 sudo[79274]: pam_unix(sudo:session): session closed for user root
Dec 11 09:13:12 compute-0 sudo[79299]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /tmp/cephadm-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060/var/lib/ceph/31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060/config
Dec 11 09:13:12 compute-0 sudo[79299]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 11 09:13:12 compute-0 sudo[79299]: pam_unix(sudo:session): session closed for user root
Dec 11 09:13:12 compute-0 python3[79271]: ansible-ansible.legacy.async_status Invoked with jid=j935090594840.78622 mode=status _async_dir=/root/.ansible_async
Dec 11 09:13:12 compute-0 sudo[79266]: pam_unix(sudo:session): session closed for user root
Dec 11 09:13:12 compute-0 sudo[79324]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/touch /tmp/cephadm-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060/var/lib/ceph/31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060/config/ceph.client.admin.keyring.new
Dec 11 09:13:12 compute-0 sudo[79324]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 11 09:13:12 compute-0 sudo[79324]: pam_unix(sudo:session): session closed for user root
Dec 11 09:13:12 compute-0 sudo[79352]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R ceph-admin /tmp/cephadm-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060
Dec 11 09:13:12 compute-0 sudo[79352]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 11 09:13:12 compute-0 sudo[79352]: pam_unix(sudo:session): session closed for user root
Dec 11 09:13:12 compute-0 sudo[79440]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bckbdifoeahzebmziwqjcingbrbwlwud ; /usr/bin/python3'
Dec 11 09:13:12 compute-0 sudo[79440]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 11 09:13:12 compute-0 sudo[79402]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060/var/lib/ceph/31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060/config/ceph.client.admin.keyring.new
Dec 11 09:13:12 compute-0 sudo[79402]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 11 09:13:12 compute-0 sudo[79402]: pam_unix(sudo:session): session closed for user root
Dec 11 09:13:12 compute-0 sudo[79471]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R 0:0 /tmp/cephadm-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060/var/lib/ceph/31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060/config/ceph.client.admin.keyring.new
Dec 11 09:13:12 compute-0 sudo[79471]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 11 09:13:12 compute-0 sudo[79471]: pam_unix(sudo:session): session closed for user root
Dec 11 09:13:12 compute-0 ceph-mon[74426]: Updating compute-0:/etc/ceph/ceph.client.admin.keyring
Dec 11 09:13:12 compute-0 ceph-mon[74426]: from='client.14164 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Dec 11 09:13:12 compute-0 ceph-mon[74426]: Updating compute-0:/var/lib/ceph/31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060/config/ceph.client.admin.keyring
Dec 11 09:13:12 compute-0 python3[79446]: ansible-ansible.legacy.async_status Invoked with jid=j935090594840.78622 mode=cleanup _async_dir=/root/.ansible_async
Dec 11 09:13:12 compute-0 sudo[79440]: pam_unix(sudo:session): session closed for user root
Dec 11 09:13:12 compute-0 sudo[79496]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 600 /tmp/cephadm-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060/var/lib/ceph/31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060/config/ceph.client.admin.keyring.new
Dec 11 09:13:12 compute-0 sudo[79496]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 11 09:13:12 compute-0 sudo[79496]: pam_unix(sudo:session): session closed for user root
Dec 11 09:13:13 compute-0 sudo[79521]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mv /tmp/cephadm-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060/var/lib/ceph/31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060/config/ceph.client.admin.keyring.new /var/lib/ceph/31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060/config/ceph.client.admin.keyring
Dec 11 09:13:13 compute-0 sudo[79521]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 11 09:13:13 compute-0 sudo[79521]: pam_unix(sudo:session): session closed for user root
Dec 11 09:13:13 compute-0 ceph-mon[74426]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec 11 09:13:13 compute-0 ceph-mgr[74715]: log_channel(cluster) log [DBG] : pgmap v4: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec 11 09:13:13 compute-0 ceph-mon[74426]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/1266408288' entity='mgr.compute-0.wwpcae' 
Dec 11 09:13:13 compute-0 ceph-mon[74426]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec 11 09:13:13 compute-0 ceph-mon[74426]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/1266408288' entity='mgr.compute-0.wwpcae' 
Dec 11 09:13:13 compute-0 ceph-mon[74426]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Dec 11 09:13:13 compute-0 ceph-mon[74426]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/1266408288' entity='mgr.compute-0.wwpcae' 
Dec 11 09:13:13 compute-0 ceph-mgr[74715]: [progress INFO root] update: starting ev 13365d85-b70f-4d3f-b66e-dbf5c0d49453 (Updating crash deployment (+1 -> 1))
Dec 11 09:13:13 compute-0 ceph-mon[74426]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.crash.compute-0", "caps": ["mon", "profile crash", "mgr", "profile crash"]} v 0)
Dec 11 09:13:13 compute-0 ceph-mon[74426]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/1266408288' entity='mgr.compute-0.wwpcae' cmd=[{"prefix": "auth get-or-create", "entity": "client.crash.compute-0", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]: dispatch
Dec 11 09:13:13 compute-0 ceph-mon[74426]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/1266408288' entity='mgr.compute-0.wwpcae' cmd='[{"prefix": "auth get-or-create", "entity": "client.crash.compute-0", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]': finished
Dec 11 09:13:13 compute-0 ceph-mon[74426]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 11 09:13:13 compute-0 ceph-mon[74426]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/1266408288' entity='mgr.compute-0.wwpcae' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 11 09:13:13 compute-0 ceph-mgr[74715]: [cephadm INFO cephadm.serve] Deploying daemon crash.compute-0 on compute-0
Dec 11 09:13:13 compute-0 ceph-mgr[74715]: log_channel(cephadm) log [INF] : Deploying daemon crash.compute-0 on compute-0
Dec 11 09:13:13 compute-0 sudo[79546]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 11 09:13:13 compute-0 sudo[79546]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 11 09:13:13 compute-0 sudo[79546]: pam_unix(sudo:session): session closed for user root
Dec 11 09:13:13 compute-0 sudo[79571]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 _orch deploy --fsid 31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060
Dec 11 09:13:13 compute-0 sudo[79571]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 11 09:13:13 compute-0 sudo[79619]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sxdlbjmyvvprllttoqftnjsuotkuzcll ; /usr/bin/python3'
Dec 11 09:13:13 compute-0 sudo[79619]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 11 09:13:13 compute-0 python3[79621]: ansible-ansible.builtin.stat Invoked with path=/home/ceph-admin/specs/ceph_spec.yaml follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Dec 11 09:13:13 compute-0 sudo[79619]: pam_unix(sudo:session): session closed for user root
Dec 11 09:13:13 compute-0 podman[79665]: 2025-12-11 09:13:13.573365804 +0000 UTC m=+0.042159437 container create 22f4208555fbfff72af63c3009af95ae40456cf10c706131938ecd93307a016c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gifted_dewdney, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, CEPH_REF=squid, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 11 09:13:13 compute-0 systemd[1]: Started libpod-conmon-22f4208555fbfff72af63c3009af95ae40456cf10c706131938ecd93307a016c.scope.
Dec 11 09:13:13 compute-0 systemd[1]: Started libcrun container.
Dec 11 09:13:13 compute-0 podman[79665]: 2025-12-11 09:13:13.647867282 +0000 UTC m=+0.116660935 container init 22f4208555fbfff72af63c3009af95ae40456cf10c706131938ecd93307a016c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gifted_dewdney, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True)
Dec 11 09:13:13 compute-0 podman[79665]: 2025-12-11 09:13:13.556388786 +0000 UTC m=+0.025182439 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec 11 09:13:13 compute-0 podman[79665]: 2025-12-11 09:13:13.654367994 +0000 UTC m=+0.123161627 container start 22f4208555fbfff72af63c3009af95ae40456cf10c706131938ecd93307a016c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gifted_dewdney, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Dec 11 09:13:13 compute-0 podman[79665]: 2025-12-11 09:13:13.657225581 +0000 UTC m=+0.126019234 container attach 22f4208555fbfff72af63c3009af95ae40456cf10c706131938ecd93307a016c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gifted_dewdney, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325)
Dec 11 09:13:13 compute-0 gifted_dewdney[79682]: 167 167
Dec 11 09:13:13 compute-0 systemd[1]: libpod-22f4208555fbfff72af63c3009af95ae40456cf10c706131938ecd93307a016c.scope: Deactivated successfully.
Dec 11 09:13:13 compute-0 podman[79665]: 2025-12-11 09:13:13.659462808 +0000 UTC m=+0.128256441 container died 22f4208555fbfff72af63c3009af95ae40456cf10c706131938ecd93307a016c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gifted_dewdney, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Dec 11 09:13:13 compute-0 systemd[1]: var-lib-containers-storage-overlay-d143663b80d947441ef65c97f24777789afce67e9f51d9fd9537ca6dbfa9aead-merged.mount: Deactivated successfully.
Dec 11 09:13:13 compute-0 podman[79665]: 2025-12-11 09:13:13.69357665 +0000 UTC m=+0.162370283 container remove 22f4208555fbfff72af63c3009af95ae40456cf10c706131938ecd93307a016c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gifted_dewdney, ceph=True, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 11 09:13:13 compute-0 systemd[1]: libpod-conmon-22f4208555fbfff72af63c3009af95ae40456cf10c706131938ecd93307a016c.scope: Deactivated successfully.
Dec 11 09:13:13 compute-0 systemd[1]: Reloading.
Dec 11 09:13:13 compute-0 systemd-rc-local-generator[79747]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 11 09:13:13 compute-0 systemd-sysv-generator[79752]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 11 09:13:13 compute-0 ceph-mon[74426]: mon.compute-0@0(leader).osd e2 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 11 09:13:13 compute-0 sudo[79755]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-miktymdbgemyegqjmhgiiyikdwyjhshb ; /usr/bin/python3'
Dec 11 09:13:13 compute-0 sudo[79755]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 11 09:13:14 compute-0 systemd[1]: Reloading.
Dec 11 09:13:14 compute-0 ceph-mon[74426]: pgmap v4: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec 11 09:13:14 compute-0 ceph-mon[74426]: from='mgr.14124 192.168.122.100:0/1266408288' entity='mgr.compute-0.wwpcae' 
Dec 11 09:13:14 compute-0 ceph-mon[74426]: from='mgr.14124 192.168.122.100:0/1266408288' entity='mgr.compute-0.wwpcae' 
Dec 11 09:13:14 compute-0 ceph-mon[74426]: from='mgr.14124 192.168.122.100:0/1266408288' entity='mgr.compute-0.wwpcae' 
Dec 11 09:13:14 compute-0 ceph-mon[74426]: from='mgr.14124 192.168.122.100:0/1266408288' entity='mgr.compute-0.wwpcae' cmd=[{"prefix": "auth get-or-create", "entity": "client.crash.compute-0", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]: dispatch
Dec 11 09:13:14 compute-0 ceph-mon[74426]: from='mgr.14124 192.168.122.100:0/1266408288' entity='mgr.compute-0.wwpcae' cmd='[{"prefix": "auth get-or-create", "entity": "client.crash.compute-0", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]': finished
Dec 11 09:13:14 compute-0 ceph-mon[74426]: from='mgr.14124 192.168.122.100:0/1266408288' entity='mgr.compute-0.wwpcae' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 11 09:13:14 compute-0 ceph-mon[74426]: Deploying daemon crash.compute-0 on compute-0
Dec 11 09:13:14 compute-0 systemd-sysv-generator[79798]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 11 09:13:14 compute-0 systemd-rc-local-generator[79794]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 11 09:13:14 compute-0 python3[79762]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch status --format json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 11 09:13:14 compute-0 podman[79802]: 2025-12-11 09:13:14.209543992 +0000 UTC m=+0.049569131 container create f2003814ca5b4863739692e7ab3b9fadeea93a7faba9398d8be04f8f44dff478 (image=quay.io/ceph/ceph:v19, name=affectionate_cannon, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.40.1, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325)
Dec 11 09:13:14 compute-0 podman[79802]: 2025-12-11 09:13:14.19102303 +0000 UTC m=+0.031048189 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Dec 11 09:13:14 compute-0 systemd[1]: Started libpod-conmon-f2003814ca5b4863739692e7ab3b9fadeea93a7faba9398d8be04f8f44dff478.scope.
Dec 11 09:13:14 compute-0 systemd[1]: Starting Ceph crash.compute-0 for 31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060...
Dec 11 09:13:14 compute-0 systemd[1]: Started libcrun container.
Dec 11 09:13:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b8ee5b775f41a77205da901f9c0b0fabaf432d97f8bb5fb2e33c841846a5280b/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Dec 11 09:13:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b8ee5b775f41a77205da901f9c0b0fabaf432d97f8bb5fb2e33c841846a5280b/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec 11 09:13:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b8ee5b775f41a77205da901f9c0b0fabaf432d97f8bb5fb2e33c841846a5280b/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 11 09:13:14 compute-0 podman[79802]: 2025-12-11 09:13:14.343262997 +0000 UTC m=+0.183288156 container init f2003814ca5b4863739692e7ab3b9fadeea93a7faba9398d8be04f8f44dff478 (image=quay.io/ceph/ceph:v19, name=affectionate_cannon, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 11 09:13:14 compute-0 podman[79802]: 2025-12-11 09:13:14.35418215 +0000 UTC m=+0.194207289 container start f2003814ca5b4863739692e7ab3b9fadeea93a7faba9398d8be04f8f44dff478 (image=quay.io/ceph/ceph:v19, name=affectionate_cannon, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, ceph=True, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec 11 09:13:14 compute-0 podman[79802]: 2025-12-11 09:13:14.359580603 +0000 UTC m=+0.199605742 container attach f2003814ca5b4863739692e7ab3b9fadeea93a7faba9398d8be04f8f44dff478 (image=quay.io/ceph/ceph:v19, name=affectionate_cannon, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True)
Dec 11 09:13:14 compute-0 podman[79885]: 2025-12-11 09:13:14.568578984 +0000 UTC m=+0.048513593 container create ee03acbd9a69a14abc6e669457d1d3917e304e3934cfbbda4afd13046b562a51 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-crash-compute-0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default)
Dec 11 09:13:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b6a03ecf64ed0d49bb16ca560670859b2567b2720bfa85780c6935891fd302a5/merged/etc/ceph/ceph.client.crash.compute-0.keyring supports timestamps until 2038 (0x7fffffff)
Dec 11 09:13:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b6a03ecf64ed0d49bb16ca560670859b2567b2720bfa85780c6935891fd302a5/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 11 09:13:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b6a03ecf64ed0d49bb16ca560670859b2567b2720bfa85780c6935891fd302a5/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 11 09:13:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b6a03ecf64ed0d49bb16ca560670859b2567b2720bfa85780c6935891fd302a5/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 11 09:13:14 compute-0 podman[79885]: 2025-12-11 09:13:14.644829473 +0000 UTC m=+0.124764102 container init ee03acbd9a69a14abc6e669457d1d3917e304e3934cfbbda4afd13046b562a51 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-crash-compute-0, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 11 09:13:14 compute-0 podman[79885]: 2025-12-11 09:13:14.550499219 +0000 UTC m=+0.030433858 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec 11 09:13:14 compute-0 podman[79885]: 2025-12-11 09:13:14.650444074 +0000 UTC m=+0.130378683 container start ee03acbd9a69a14abc6e669457d1d3917e304e3934cfbbda4afd13046b562a51 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-crash-compute-0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Dec 11 09:13:14 compute-0 bash[79885]: ee03acbd9a69a14abc6e669457d1d3917e304e3934cfbbda4afd13046b562a51
Dec 11 09:13:14 compute-0 systemd[1]: Started Ceph crash.compute-0 for 31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060.
Dec 11 09:13:14 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-crash-compute-0[79900]: INFO:ceph-crash:pinging cluster to exercise our key
Dec 11 09:13:14 compute-0 sudo[79571]: pam_unix(sudo:session): session closed for user root
Dec 11 09:13:14 compute-0 ceph-mon[74426]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec 11 09:13:14 compute-0 ceph-mon[74426]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/1266408288' entity='mgr.compute-0.wwpcae' 
Dec 11 09:13:14 compute-0 ceph-mon[74426]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec 11 09:13:14 compute-0 ceph-mon[74426]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/1266408288' entity='mgr.compute-0.wwpcae' 
Dec 11 09:13:14 compute-0 ceph-mon[74426]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.crash}] v 0)
Dec 11 09:13:14 compute-0 ceph-mon[74426]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/1266408288' entity='mgr.compute-0.wwpcae' 
Dec 11 09:13:14 compute-0 ceph-mgr[74715]: [progress INFO root] complete: finished ev 13365d85-b70f-4d3f-b66e-dbf5c0d49453 (Updating crash deployment (+1 -> 1))
Dec 11 09:13:14 compute-0 ceph-mgr[74715]: [progress INFO root] Completed event 13365d85-b70f-4d3f-b66e-dbf5c0d49453 (Updating crash deployment (+1 -> 1)) in 2 seconds
Dec 11 09:13:14 compute-0 ceph-mon[74426]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.crash}] v 0)
Dec 11 09:13:14 compute-0 ceph-mon[74426]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/1266408288' entity='mgr.compute-0.wwpcae' 
Dec 11 09:13:14 compute-0 ceph-mon[74426]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mon}] v 0)
Dec 11 09:13:14 compute-0 ceph-mon[74426]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/1266408288' entity='mgr.compute-0.wwpcae' 
Dec 11 09:13:14 compute-0 ceph-mon[74426]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mgr}] v 0)
Dec 11 09:13:14 compute-0 ceph-mgr[74715]: log_channel(audit) log [DBG] : from='client.14166 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Dec 11 09:13:14 compute-0 affectionate_cannon[79819]: 
Dec 11 09:13:14 compute-0 affectionate_cannon[79819]: {"available": true, "backend": "cephadm", "paused": false, "workers": 10}
Dec 11 09:13:14 compute-0 ceph-mon[74426]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/1266408288' entity='mgr.compute-0.wwpcae' 
Dec 11 09:13:14 compute-0 systemd[1]: libpod-f2003814ca5b4863739692e7ab3b9fadeea93a7faba9398d8be04f8f44dff478.scope: Deactivated successfully.
Dec 11 09:13:14 compute-0 podman[79802]: 2025-12-11 09:13:14.794179612 +0000 UTC m=+0.634204751 container died f2003814ca5b4863739692e7ab3b9fadeea93a7faba9398d8be04f8f44dff478 (image=quay.io/ceph/ceph:v19, name=affectionate_cannon, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 11 09:13:14 compute-0 systemd[1]: var-lib-containers-storage-overlay-b8ee5b775f41a77205da901f9c0b0fabaf432d97f8bb5fb2e33c841846a5280b-merged.mount: Deactivated successfully.
Dec 11 09:13:14 compute-0 podman[79802]: 2025-12-11 09:13:14.835587643 +0000 UTC m=+0.675612782 container remove f2003814ca5b4863739692e7ab3b9fadeea93a7faba9398d8be04f8f44dff478 (image=quay.io/ceph/ceph:v19, name=affectionate_cannon, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec 11 09:13:14 compute-0 sudo[79909]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 11 09:13:14 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-crash-compute-0[79900]: 2025-12-11T09:13:14.836+0000 7f9b0b825640 -1 auth: unable to find a keyring on /etc/ceph/ceph.client.admin.keyring,/etc/ceph/ceph.keyring,/etc/ceph/keyring,/etc/ceph/keyring.bin: (2) No such file or directory
Dec 11 09:13:14 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-crash-compute-0[79900]: 2025-12-11T09:13:14.836+0000 7f9b0b825640 -1 AuthRegistry(0x7f9b040698f0) no keyring found at /etc/ceph/ceph.client.admin.keyring,/etc/ceph/ceph.keyring,/etc/ceph/keyring,/etc/ceph/keyring.bin, disabling cephx
Dec 11 09:13:14 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-crash-compute-0[79900]: 2025-12-11T09:13:14.838+0000 7f9b0b825640 -1 auth: unable to find a keyring on /etc/ceph/ceph.client.admin.keyring,/etc/ceph/ceph.keyring,/etc/ceph/keyring,/etc/ceph/keyring.bin: (2) No such file or directory
Dec 11 09:13:14 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-crash-compute-0[79900]: 2025-12-11T09:13:14.838+0000 7f9b0b825640 -1 AuthRegistry(0x7f9b0b823ff0) no keyring found at /etc/ceph/ceph.client.admin.keyring,/etc/ceph/ceph.keyring,/etc/ceph/keyring,/etc/ceph/keyring.bin, disabling cephx
Dec 11 09:13:14 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-crash-compute-0[79900]: 2025-12-11T09:13:14.839+0000 7f9b0959a640 -1 monclient(hunting): handle_auth_bad_method server allowed_methods [2] but i only support [1]
Dec 11 09:13:14 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-crash-compute-0[79900]: 2025-12-11T09:13:14.839+0000 7f9b0b825640 -1 monclient: authenticate NOTE: no keyring found; disabled cephx authentication
Dec 11 09:13:14 compute-0 sudo[79909]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 11 09:13:14 compute-0 systemd[1]: libpod-conmon-f2003814ca5b4863739692e7ab3b9fadeea93a7faba9398d8be04f8f44dff478.scope: Deactivated successfully.
Dec 11 09:13:14 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-crash-compute-0[79900]: [errno 13] RADOS permission denied (error connecting to the cluster)
Dec 11 09:13:14 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-crash-compute-0[79900]: INFO:ceph-crash:monitoring path /var/lib/ceph/crash, delay 600s
Dec 11 09:13:14 compute-0 sudo[79909]: pam_unix(sudo:session): session closed for user root
Dec 11 09:13:14 compute-0 sudo[79954]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 11 09:13:14 compute-0 sudo[79954]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 11 09:13:14 compute-0 sudo[79954]: pam_unix(sudo:session): session closed for user root
Dec 11 09:13:14 compute-0 sudo[79979]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ls
Dec 11 09:13:14 compute-0 sudo[79979]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 11 09:13:15 compute-0 ceph-mgr[74715]: log_channel(cluster) log [DBG] : pgmap v5: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec 11 09:13:15 compute-0 podman[80073]: 2025-12-11 09:13:15.58506556 +0000 UTC m=+0.076391643 container exec 9c08b5b1828392d2cc014af7ec9c415c9c915d4b6dd9798da9403e6929851bd4 (image=quay.io/ceph/ceph:v19, name=ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-mon-compute-0, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec 11 09:13:15 compute-0 podman[80073]: 2025-12-11 09:13:15.686832258 +0000 UTC m=+0.178158321 container exec_died 9c08b5b1828392d2cc014af7ec9c415c9c915d4b6dd9798da9403e6929851bd4 (image=quay.io/ceph/ceph:v19, name=ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-mon-compute-0, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1)
Dec 11 09:13:15 compute-0 ceph-mon[74426]: from='mgr.14124 192.168.122.100:0/1266408288' entity='mgr.compute-0.wwpcae' 
Dec 11 09:13:15 compute-0 ceph-mon[74426]: from='mgr.14124 192.168.122.100:0/1266408288' entity='mgr.compute-0.wwpcae' 
Dec 11 09:13:15 compute-0 ceph-mon[74426]: from='mgr.14124 192.168.122.100:0/1266408288' entity='mgr.compute-0.wwpcae' 
Dec 11 09:13:15 compute-0 ceph-mon[74426]: from='mgr.14124 192.168.122.100:0/1266408288' entity='mgr.compute-0.wwpcae' 
Dec 11 09:13:15 compute-0 ceph-mon[74426]: from='mgr.14124 192.168.122.100:0/1266408288' entity='mgr.compute-0.wwpcae' 
Dec 11 09:13:15 compute-0 ceph-mon[74426]: from='client.14166 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Dec 11 09:13:15 compute-0 ceph-mon[74426]: from='mgr.14124 192.168.122.100:0/1266408288' entity='mgr.compute-0.wwpcae' 
Dec 11 09:13:15 compute-0 ceph-mon[74426]: pgmap v5: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec 11 09:13:15 compute-0 sudo[79755]: pam_unix(sudo:session): session closed for user root
Dec 11 09:13:15 compute-0 sudo[79979]: pam_unix(sudo:session): session closed for user root
Dec 11 09:13:15 compute-0 ceph-mon[74426]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec 11 09:13:15 compute-0 ceph-mon[74426]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/1266408288' entity='mgr.compute-0.wwpcae' 
Dec 11 09:13:15 compute-0 ceph-mon[74426]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec 11 09:13:15 compute-0 ceph-mon[74426]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/1266408288' entity='mgr.compute-0.wwpcae' 
Dec 11 09:13:15 compute-0 ceph-mon[74426]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 11 09:13:15 compute-0 ceph-mon[74426]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/1266408288' entity='mgr.compute-0.wwpcae' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 11 09:13:15 compute-0 ceph-mon[74426]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Dec 11 09:13:15 compute-0 ceph-mon[74426]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/1266408288' entity='mgr.compute-0.wwpcae' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec 11 09:13:15 compute-0 ceph-mon[74426]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Dec 11 09:13:16 compute-0 ceph-mon[74426]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/1266408288' entity='mgr.compute-0.wwpcae' 
Dec 11 09:13:16 compute-0 sudo[80140]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 11 09:13:16 compute-0 ansible-async_wrapper.py[78693]: Done in kid B.
Dec 11 09:13:16 compute-0 sudo[80140]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 11 09:13:16 compute-0 sudo[80140]: pam_unix(sudo:session): session closed for user root
Dec 11 09:13:16 compute-0 ceph-mon[74426]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/alertmanager/web_user}] v 0)
Dec 11 09:13:16 compute-0 ceph-mon[74426]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/1266408288' entity='mgr.compute-0.wwpcae' 
Dec 11 09:13:16 compute-0 ceph-mon[74426]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/alertmanager/web_password}] v 0)
Dec 11 09:13:16 compute-0 ceph-mon[74426]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/1266408288' entity='mgr.compute-0.wwpcae' 
Dec 11 09:13:16 compute-0 ceph-mon[74426]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/prometheus/web_user}] v 0)
Dec 11 09:13:16 compute-0 ceph-mon[74426]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/1266408288' entity='mgr.compute-0.wwpcae' 
Dec 11 09:13:16 compute-0 ceph-mon[74426]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/prometheus/web_password}] v 0)
Dec 11 09:13:16 compute-0 ceph-mon[74426]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/1266408288' entity='mgr.compute-0.wwpcae' 
Dec 11 09:13:16 compute-0 ceph-mgr[74715]: [cephadm INFO cephadm.serve] Reconfiguring mon.compute-0 (unknown last config time)...
Dec 11 09:13:16 compute-0 ceph-mgr[74715]: log_channel(cephadm) log [INF] : Reconfiguring mon.compute-0 (unknown last config time)...
Dec 11 09:13:16 compute-0 ceph-mon[74426]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "mon."} v 0)
Dec 11 09:13:16 compute-0 ceph-mon[74426]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/1266408288' entity='mgr.compute-0.wwpcae' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch
Dec 11 09:13:16 compute-0 ceph-mon[74426]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config get", "who": "mon", "key": "public_network"} v 0)
Dec 11 09:13:16 compute-0 ceph-mon[74426]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/1266408288' entity='mgr.compute-0.wwpcae' cmd=[{"prefix": "config get", "who": "mon", "key": "public_network"}]: dispatch
Dec 11 09:13:16 compute-0 ceph-mon[74426]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 11 09:13:16 compute-0 ceph-mon[74426]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/1266408288' entity='mgr.compute-0.wwpcae' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 11 09:13:16 compute-0 ceph-mgr[74715]: [cephadm INFO cephadm.serve] Reconfiguring daemon mon.compute-0 on compute-0
Dec 11 09:13:16 compute-0 ceph-mgr[74715]: log_channel(cephadm) log [INF] : Reconfiguring daemon mon.compute-0 on compute-0
Dec 11 09:13:16 compute-0 ceph-mgr[74715]: [progress INFO root] Writing back 1 completed events
Dec 11 09:13:16 compute-0 ceph-mon[74426]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0)
Dec 11 09:13:16 compute-0 ceph-mon[74426]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/1266408288' entity='mgr.compute-0.wwpcae' 
Dec 11 09:13:16 compute-0 sudo[80165]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 11 09:13:16 compute-0 sudo[80165]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 11 09:13:16 compute-0 sudo[80165]: pam_unix(sudo:session): session closed for user root
Dec 11 09:13:16 compute-0 sudo[80211]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-evxdlcqyncdpjjujqkfuheyuyiaoeduw ; /usr/bin/python3'
Dec 11 09:13:16 compute-0 sudo[80211]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 11 09:13:16 compute-0 sudo[80215]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph:v19 --timeout 895 _orch deploy --fsid 31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060
Dec 11 09:13:16 compute-0 sudo[80215]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 11 09:13:16 compute-0 python3[80216]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   config set global log_to_file true _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 11 09:13:16 compute-0 podman[80241]: 2025-12-11 09:13:16.403782508 +0000 UTC m=+0.052302794 container create 8907404d7a268688b367dba97afa97b05f81f189b1b06ece16c527ab9f3445b4 (image=quay.io/ceph/ceph:v19, name=exciting_clarke, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, ceph=True, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 11 09:13:16 compute-0 podman[80241]: 2025-12-11 09:13:16.384201421 +0000 UTC m=+0.032721737 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Dec 11 09:13:16 compute-0 systemd[1]: Started libpod-conmon-8907404d7a268688b367dba97afa97b05f81f189b1b06ece16c527ab9f3445b4.scope.
Dec 11 09:13:16 compute-0 systemd[1]: Started libcrun container.
Dec 11 09:13:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1b6912d4ee0cb43f3ab55e5883a9ca4bbb9bf2780d923dfb7bbe39bdbc7fe004/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec 11 09:13:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1b6912d4ee0cb43f3ab55e5883a9ca4bbb9bf2780d923dfb7bbe39bdbc7fe004/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 11 09:13:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1b6912d4ee0cb43f3ab55e5883a9ca4bbb9bf2780d923dfb7bbe39bdbc7fe004/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Dec 11 09:13:16 compute-0 podman[80241]: 2025-12-11 09:13:16.576402059 +0000 UTC m=+0.224922365 container init 8907404d7a268688b367dba97afa97b05f81f189b1b06ece16c527ab9f3445b4 (image=quay.io/ceph/ceph:v19, name=exciting_clarke, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0)
Dec 11 09:13:16 compute-0 podman[80241]: 2025-12-11 09:13:16.585533761 +0000 UTC m=+0.234054047 container start 8907404d7a268688b367dba97afa97b05f81f189b1b06ece16c527ab9f3445b4 (image=quay.io/ceph/ceph:v19, name=exciting_clarke, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=squid, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 11 09:13:16 compute-0 podman[80241]: 2025-12-11 09:13:16.589505515 +0000 UTC m=+0.238025821 container attach 8907404d7a268688b367dba97afa97b05f81f189b1b06ece16c527ab9f3445b4 (image=quay.io/ceph/ceph:v19, name=exciting_clarke, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, CEPH_REF=squid, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 11 09:13:16 compute-0 podman[80276]: 2025-12-11 09:13:16.635301196 +0000 UTC m=+0.041192294 container create b1f9607ea78c68c92d24839016d51052381e462e6b3d862ce640d5ad3f578575 (image=quay.io/ceph/ceph:v19, name=gallant_sinoussi, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 11 09:13:16 compute-0 systemd[1]: Started libpod-conmon-b1f9607ea78c68c92d24839016d51052381e462e6b3d862ce640d5ad3f578575.scope.
Dec 11 09:13:16 compute-0 systemd[1]: Started libcrun container.
Dec 11 09:13:16 compute-0 podman[80276]: 2025-12-11 09:13:16.706388819 +0000 UTC m=+0.112279937 container init b1f9607ea78c68c92d24839016d51052381e462e6b3d862ce640d5ad3f578575 (image=quay.io/ceph/ceph:v19, name=gallant_sinoussi, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1)
Dec 11 09:13:16 compute-0 podman[80276]: 2025-12-11 09:13:16.711909567 +0000 UTC m=+0.117800665 container start b1f9607ea78c68c92d24839016d51052381e462e6b3d862ce640d5ad3f578575 (image=quay.io/ceph/ceph:v19, name=gallant_sinoussi, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 11 09:13:16 compute-0 podman[80276]: 2025-12-11 09:13:16.617098396 +0000 UTC m=+0.022989514 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Dec 11 09:13:16 compute-0 podman[80276]: 2025-12-11 09:13:16.715933904 +0000 UTC m=+0.121825002 container attach b1f9607ea78c68c92d24839016d51052381e462e6b3d862ce640d5ad3f578575 (image=quay.io/ceph/ceph:v19, name=gallant_sinoussi, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 11 09:13:16 compute-0 gallant_sinoussi[80292]: 167 167
Dec 11 09:13:16 compute-0 systemd[1]: libpod-b1f9607ea78c68c92d24839016d51052381e462e6b3d862ce640d5ad3f578575.scope: Deactivated successfully.
Dec 11 09:13:16 compute-0 podman[80276]: 2025-12-11 09:13:16.719362011 +0000 UTC m=+0.125253109 container died b1f9607ea78c68c92d24839016d51052381e462e6b3d862ce640d5ad3f578575 (image=quay.io/ceph/ceph:v19, name=gallant_sinoussi, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True)
Dec 11 09:13:16 compute-0 systemd[1]: var-lib-containers-storage-overlay-17177b4868780b684fa887e88cb7d5c64b9c3abf1bd6ad25dbc620d811ca74d4-merged.mount: Deactivated successfully.
Dec 11 09:13:16 compute-0 podman[80276]: 2025-12-11 09:13:16.765072738 +0000 UTC m=+0.170963836 container remove b1f9607ea78c68c92d24839016d51052381e462e6b3d862ce640d5ad3f578575 (image=quay.io/ceph/ceph:v19, name=gallant_sinoussi, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Dec 11 09:13:16 compute-0 systemd[1]: libpod-conmon-b1f9607ea78c68c92d24839016d51052381e462e6b3d862ce640d5ad3f578575.scope: Deactivated successfully.
Dec 11 09:13:16 compute-0 sudo[80215]: pam_unix(sudo:session): session closed for user root
Dec 11 09:13:16 compute-0 ceph-mon[74426]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec 11 09:13:16 compute-0 ceph-mon[74426]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/1266408288' entity='mgr.compute-0.wwpcae' 
Dec 11 09:13:16 compute-0 ceph-mon[74426]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec 11 09:13:16 compute-0 ceph-mon[74426]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/1266408288' entity='mgr.compute-0.wwpcae' 
Dec 11 09:13:16 compute-0 ceph-mgr[74715]: [cephadm INFO cephadm.serve] Reconfiguring mgr.compute-0.wwpcae (unknown last config time)...
Dec 11 09:13:16 compute-0 ceph-mgr[74715]: log_channel(cephadm) log [INF] : Reconfiguring mgr.compute-0.wwpcae (unknown last config time)...
Dec 11 09:13:16 compute-0 ceph-mon[74426]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get-or-create", "entity": "mgr.compute-0.wwpcae", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} v 0)
Dec 11 09:13:16 compute-0 ceph-mon[74426]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/1266408288' entity='mgr.compute-0.wwpcae' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.compute-0.wwpcae", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch
Dec 11 09:13:16 compute-0 ceph-mon[74426]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr services"} v 0)
Dec 11 09:13:16 compute-0 ceph-mon[74426]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/1266408288' entity='mgr.compute-0.wwpcae' cmd=[{"prefix": "mgr services"}]: dispatch
Dec 11 09:13:16 compute-0 ceph-mon[74426]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 11 09:13:16 compute-0 ceph-mon[74426]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/1266408288' entity='mgr.compute-0.wwpcae' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 11 09:13:16 compute-0 ceph-mgr[74715]: [cephadm INFO cephadm.serve] Reconfiguring daemon mgr.compute-0.wwpcae on compute-0
Dec 11 09:13:16 compute-0 ceph-mgr[74715]: log_channel(cephadm) log [INF] : Reconfiguring daemon mgr.compute-0.wwpcae on compute-0
Dec 11 09:13:16 compute-0 sudo[80329]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 11 09:13:16 compute-0 sudo[80329]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 11 09:13:16 compute-0 sudo[80329]: pam_unix(sudo:session): session closed for user root
Dec 11 09:13:16 compute-0 ceph-mon[74426]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=log_to_file}] v 0)
Dec 11 09:13:16 compute-0 ceph-mon[74426]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3403015312' entity='client.admin' 
Dec 11 09:13:16 compute-0 sudo[80354]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph:v19 --timeout 895 _orch deploy --fsid 31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060
Dec 11 09:13:16 compute-0 sudo[80354]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 11 09:13:16 compute-0 systemd[1]: libpod-8907404d7a268688b367dba97afa97b05f81f189b1b06ece16c527ab9f3445b4.scope: Deactivated successfully.
Dec 11 09:13:16 compute-0 podman[80241]: 2025-12-11 09:13:16.957491925 +0000 UTC m=+0.606012211 container died 8907404d7a268688b367dba97afa97b05f81f189b1b06ece16c527ab9f3445b4 (image=quay.io/ceph/ceph:v19, name=exciting_clarke, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True)
Dec 11 09:13:16 compute-0 ceph-mon[74426]: from='mgr.14124 192.168.122.100:0/1266408288' entity='mgr.compute-0.wwpcae' 
Dec 11 09:13:16 compute-0 ceph-mon[74426]: from='mgr.14124 192.168.122.100:0/1266408288' entity='mgr.compute-0.wwpcae' 
Dec 11 09:13:16 compute-0 ceph-mon[74426]: from='mgr.14124 192.168.122.100:0/1266408288' entity='mgr.compute-0.wwpcae' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 11 09:13:16 compute-0 ceph-mon[74426]: from='mgr.14124 192.168.122.100:0/1266408288' entity='mgr.compute-0.wwpcae' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec 11 09:13:16 compute-0 ceph-mon[74426]: from='mgr.14124 192.168.122.100:0/1266408288' entity='mgr.compute-0.wwpcae' 
Dec 11 09:13:16 compute-0 ceph-mon[74426]: from='mgr.14124 192.168.122.100:0/1266408288' entity='mgr.compute-0.wwpcae' 
Dec 11 09:13:16 compute-0 ceph-mon[74426]: from='mgr.14124 192.168.122.100:0/1266408288' entity='mgr.compute-0.wwpcae' 
Dec 11 09:13:16 compute-0 ceph-mon[74426]: from='mgr.14124 192.168.122.100:0/1266408288' entity='mgr.compute-0.wwpcae' 
Dec 11 09:13:16 compute-0 ceph-mon[74426]: from='mgr.14124 192.168.122.100:0/1266408288' entity='mgr.compute-0.wwpcae' 
Dec 11 09:13:16 compute-0 ceph-mon[74426]: Reconfiguring mon.compute-0 (unknown last config time)...
Dec 11 09:13:16 compute-0 ceph-mon[74426]: from='mgr.14124 192.168.122.100:0/1266408288' entity='mgr.compute-0.wwpcae' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch
Dec 11 09:13:16 compute-0 ceph-mon[74426]: from='mgr.14124 192.168.122.100:0/1266408288' entity='mgr.compute-0.wwpcae' cmd=[{"prefix": "config get", "who": "mon", "key": "public_network"}]: dispatch
Dec 11 09:13:16 compute-0 ceph-mon[74426]: from='mgr.14124 192.168.122.100:0/1266408288' entity='mgr.compute-0.wwpcae' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 11 09:13:16 compute-0 ceph-mon[74426]: Reconfiguring daemon mon.compute-0 on compute-0
Dec 11 09:13:16 compute-0 ceph-mon[74426]: from='mgr.14124 192.168.122.100:0/1266408288' entity='mgr.compute-0.wwpcae' 
Dec 11 09:13:16 compute-0 ceph-mon[74426]: from='mgr.14124 192.168.122.100:0/1266408288' entity='mgr.compute-0.wwpcae' 
Dec 11 09:13:16 compute-0 ceph-mon[74426]: from='mgr.14124 192.168.122.100:0/1266408288' entity='mgr.compute-0.wwpcae' 
Dec 11 09:13:16 compute-0 ceph-mon[74426]: from='mgr.14124 192.168.122.100:0/1266408288' entity='mgr.compute-0.wwpcae' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.compute-0.wwpcae", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch
Dec 11 09:13:16 compute-0 ceph-mon[74426]: from='mgr.14124 192.168.122.100:0/1266408288' entity='mgr.compute-0.wwpcae' cmd=[{"prefix": "mgr services"}]: dispatch
Dec 11 09:13:16 compute-0 ceph-mon[74426]: from='mgr.14124 192.168.122.100:0/1266408288' entity='mgr.compute-0.wwpcae' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 11 09:13:16 compute-0 ceph-mon[74426]: from='client.? 192.168.122.100:0/3403015312' entity='client.admin' 
Dec 11 09:13:17 compute-0 systemd[1]: var-lib-containers-storage-overlay-1b6912d4ee0cb43f3ab55e5883a9ca4bbb9bf2780d923dfb7bbe39bdbc7fe004-merged.mount: Deactivated successfully.
Dec 11 09:13:17 compute-0 podman[80241]: 2025-12-11 09:13:17.019019341 +0000 UTC m=+0.667539637 container remove 8907404d7a268688b367dba97afa97b05f81f189b1b06ece16c527ab9f3445b4 (image=quay.io/ceph/ceph:v19, name=exciting_clarke, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=squid, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 11 09:13:17 compute-0 systemd[1]: libpod-conmon-8907404d7a268688b367dba97afa97b05f81f189b1b06ece16c527ab9f3445b4.scope: Deactivated successfully.
Dec 11 09:13:17 compute-0 sudo[80211]: pam_unix(sudo:session): session closed for user root
Dec 11 09:13:17 compute-0 ceph-mgr[74715]: log_channel(cluster) log [DBG] : pgmap v6: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec 11 09:13:17 compute-0 sudo[80424]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uwawkidacbowcfbohuqxwicszkkygcun ; /usr/bin/python3'
Dec 11 09:13:17 compute-0 sudo[80424]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 11 09:13:17 compute-0 podman[80436]: 2025-12-11 09:13:17.311684263 +0000 UTC m=+0.047210689 container create 8f42423eedae77b790d7f9eacc2633f3828f7b7485f9883bd6312b5c89e38135 (image=quay.io/ceph/ceph:v19, name=practical_williamson, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Dec 11 09:13:17 compute-0 systemd[1]: Started libpod-conmon-8f42423eedae77b790d7f9eacc2633f3828f7b7485f9883bd6312b5c89e38135.scope.
Dec 11 09:13:17 compute-0 systemd[1]: Started libcrun container.
Dec 11 09:13:17 compute-0 python3[80431]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   config set global mon_cluster_log_to_file true _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 11 09:13:17 compute-0 podman[80436]: 2025-12-11 09:13:17.385036003 +0000 UTC m=+0.120562449 container init 8f42423eedae77b790d7f9eacc2633f3828f7b7485f9883bd6312b5c89e38135 (image=quay.io/ceph/ceph:v19, name=practical_williamson, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 11 09:13:17 compute-0 podman[80436]: 2025-12-11 09:13:17.291988973 +0000 UTC m=+0.027515439 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Dec 11 09:13:17 compute-0 podman[80436]: 2025-12-11 09:13:17.392099404 +0000 UTC m=+0.127625830 container start 8f42423eedae77b790d7f9eacc2633f3828f7b7485f9883bd6312b5c89e38135 (image=quay.io/ceph/ceph:v19, name=practical_williamson, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Dec 11 09:13:17 compute-0 practical_williamson[80453]: 167 167
Dec 11 09:13:17 compute-0 podman[80436]: 2025-12-11 09:13:17.395669245 +0000 UTC m=+0.131195691 container attach 8f42423eedae77b790d7f9eacc2633f3828f7b7485f9883bd6312b5c89e38135 (image=quay.io/ceph/ceph:v19, name=practical_williamson, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, ceph=True, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 11 09:13:17 compute-0 systemd[1]: libpod-8f42423eedae77b790d7f9eacc2633f3828f7b7485f9883bd6312b5c89e38135.scope: Deactivated successfully.
Dec 11 09:13:17 compute-0 podman[80436]: 2025-12-11 09:13:17.39697518 +0000 UTC m=+0.132501606 container died 8f42423eedae77b790d7f9eacc2633f3828f7b7485f9883bd6312b5c89e38135 (image=quay.io/ceph/ceph:v19, name=practical_williamson, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Dec 11 09:13:17 compute-0 systemd[1]: var-lib-containers-storage-overlay-1cedc3b5b37e14312e32e69d475b8e825ea194a71dce4377a48209823de07069-merged.mount: Deactivated successfully.
Dec 11 09:13:17 compute-0 podman[80436]: 2025-12-11 09:13:17.438045139 +0000 UTC m=+0.173571565 container remove 8f42423eedae77b790d7f9eacc2633f3828f7b7485f9883bd6312b5c89e38135 (image=quay.io/ceph/ceph:v19, name=practical_williamson, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 11 09:13:17 compute-0 systemd[1]: libpod-conmon-8f42423eedae77b790d7f9eacc2633f3828f7b7485f9883bd6312b5c89e38135.scope: Deactivated successfully.
Dec 11 09:13:17 compute-0 podman[80456]: 2025-12-11 09:13:17.451975733 +0000 UTC m=+0.056833146 container create 7cf9b0964f983f55ecb70add8a284f469d61e1d78fb6f634174f4f66dee5967c (image=quay.io/ceph/ceph:v19, name=agitated_williamson, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250325, CEPH_REF=squid, org.label-schema.vendor=CentOS)
Dec 11 09:13:17 compute-0 systemd[1]: Started libpod-conmon-7cf9b0964f983f55ecb70add8a284f469d61e1d78fb6f634174f4f66dee5967c.scope.
Dec 11 09:13:17 compute-0 systemd[1]: Started libcrun container.
Dec 11 09:13:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3821c71f397d5aa407b84b12d72de49dfaca2776dfe581c5239c572eaaba81b6/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Dec 11 09:13:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3821c71f397d5aa407b84b12d72de49dfaca2776dfe581c5239c572eaaba81b6/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec 11 09:13:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3821c71f397d5aa407b84b12d72de49dfaca2776dfe581c5239c572eaaba81b6/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 11 09:13:17 compute-0 sudo[80354]: pam_unix(sudo:session): session closed for user root
Dec 11 09:13:17 compute-0 ceph-mon[74426]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec 11 09:13:17 compute-0 ceph-mon[74426]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/1266408288' entity='mgr.compute-0.wwpcae' 
Dec 11 09:13:17 compute-0 podman[80456]: 2025-12-11 09:13:17.504074389 +0000 UTC m=+0.108931822 container init 7cf9b0964f983f55ecb70add8a284f469d61e1d78fb6f634174f4f66dee5967c (image=quay.io/ceph/ceph:v19, name=agitated_williamson, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=squid, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS)
Dec 11 09:13:17 compute-0 ceph-mon[74426]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec 11 09:13:17 compute-0 podman[80456]: 2025-12-11 09:13:17.51026836 +0000 UTC m=+0.115125773 container start 7cf9b0964f983f55ecb70add8a284f469d61e1d78fb6f634174f4f66dee5967c (image=quay.io/ceph/ceph:v19, name=agitated_williamson, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 11 09:13:17 compute-0 ceph-mon[74426]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/1266408288' entity='mgr.compute-0.wwpcae' 
Dec 11 09:13:17 compute-0 ceph-mon[74426]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 11 09:13:17 compute-0 ceph-mon[74426]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/1266408288' entity='mgr.compute-0.wwpcae' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 11 09:13:17 compute-0 ceph-mon[74426]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Dec 11 09:13:17 compute-0 ceph-mon[74426]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/1266408288' entity='mgr.compute-0.wwpcae' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec 11 09:13:17 compute-0 podman[80456]: 2025-12-11 09:13:17.516701399 +0000 UTC m=+0.121558832 container attach 7cf9b0964f983f55ecb70add8a284f469d61e1d78fb6f634174f4f66dee5967c (image=quay.io/ceph/ceph:v19, name=agitated_williamson, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=squid, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1)
Dec 11 09:13:17 compute-0 ceph-mon[74426]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Dec 11 09:13:17 compute-0 ceph-mon[74426]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/1266408288' entity='mgr.compute-0.wwpcae' 
Dec 11 09:13:17 compute-0 podman[80456]: 2025-12-11 09:13:17.430971358 +0000 UTC m=+0.035828771 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Dec 11 09:13:17 compute-0 sudo[80490]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 11 09:13:17 compute-0 sudo[80490]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 11 09:13:17 compute-0 sudo[80490]: pam_unix(sudo:session): session closed for user root
Dec 11 09:13:17 compute-0 ceph-mon[74426]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=mon_cluster_log_to_file}] v 0)
Dec 11 09:13:17 compute-0 ceph-mon[74426]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1690699852' entity='client.admin' 
Dec 11 09:13:17 compute-0 ceph-mon[74426]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 11 09:13:17 compute-0 ceph-mon[74426]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/1266408288' entity='mgr.compute-0.wwpcae' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 11 09:13:17 compute-0 ceph-mon[74426]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Dec 11 09:13:17 compute-0 ceph-mon[74426]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/1266408288' entity='mgr.compute-0.wwpcae' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec 11 09:13:17 compute-0 ceph-mon[74426]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Dec 11 09:13:17 compute-0 ceph-mon[74426]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/1266408288' entity='mgr.compute-0.wwpcae' 
Dec 11 09:13:17 compute-0 systemd[1]: libpod-7cf9b0964f983f55ecb70add8a284f469d61e1d78fb6f634174f4f66dee5967c.scope: Deactivated successfully.
Dec 11 09:13:17 compute-0 podman[80456]: 2025-12-11 09:13:17.904621158 +0000 UTC m=+0.509478571 container died 7cf9b0964f983f55ecb70add8a284f469d61e1d78fb6f634174f4f66dee5967c (image=quay.io/ceph/ceph:v19, name=agitated_williamson, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 11 09:13:17 compute-0 systemd[1]: var-lib-containers-storage-overlay-3821c71f397d5aa407b84b12d72de49dfaca2776dfe581c5239c572eaaba81b6-merged.mount: Deactivated successfully.
Dec 11 09:13:17 compute-0 podman[80456]: 2025-12-11 09:13:17.945452048 +0000 UTC m=+0.550309461 container remove 7cf9b0964f983f55ecb70add8a284f469d61e1d78fb6f634174f4f66dee5967c (image=quay.io/ceph/ceph:v19, name=agitated_williamson, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Dec 11 09:13:17 compute-0 systemd[1]: libpod-conmon-7cf9b0964f983f55ecb70add8a284f469d61e1d78fb6f634174f4f66dee5967c.scope: Deactivated successfully.
Dec 11 09:13:17 compute-0 sudo[80537]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 11 09:13:17 compute-0 sudo[80424]: pam_unix(sudo:session): session closed for user root
Dec 11 09:13:17 compute-0 sudo[80537]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 11 09:13:17 compute-0 sudo[80537]: pam_unix(sudo:session): session closed for user root
Dec 11 09:13:18 compute-0 sudo[80597]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ztpjhdtqfrvnftfaciofpdiawoqjozar ; /usr/bin/python3'
Dec 11 09:13:18 compute-0 sudo[80597]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 11 09:13:18 compute-0 python3[80599]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd set-require-min-compat-client mimic _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 11 09:13:18 compute-0 podman[80600]: 2025-12-11 09:13:18.422903628 +0000 UTC m=+0.062649337 container create 15622ff1cab738ac8a814208380ed42f86b263419e9a7fd03421713b677568a6 (image=quay.io/ceph/ceph:v19, name=modest_volhard, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, io.buildah.version=1.40.1, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 11 09:13:18 compute-0 systemd[1]: Started libpod-conmon-15622ff1cab738ac8a814208380ed42f86b263419e9a7fd03421713b677568a6.scope.
Dec 11 09:13:18 compute-0 podman[80600]: 2025-12-11 09:13:18.38306883 +0000 UTC m=+0.022814569 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Dec 11 09:13:18 compute-0 systemd[1]: Started libcrun container.
Dec 11 09:13:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/63f987d84e114c08de4a025776688162a66be8a1c0158d20b6d68cf7cd7d9f69/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec 11 09:13:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/63f987d84e114c08de4a025776688162a66be8a1c0158d20b6d68cf7cd7d9f69/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 11 09:13:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/63f987d84e114c08de4a025776688162a66be8a1c0158d20b6d68cf7cd7d9f69/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Dec 11 09:13:18 compute-0 podman[80600]: 2025-12-11 09:13:18.501541346 +0000 UTC m=+0.141287085 container init 15622ff1cab738ac8a814208380ed42f86b263419e9a7fd03421713b677568a6 (image=quay.io/ceph/ceph:v19, name=modest_volhard, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.build-date=20250325, io.buildah.version=1.40.1)
Dec 11 09:13:18 compute-0 ceph-mon[74426]: Reconfiguring mgr.compute-0.wwpcae (unknown last config time)...
Dec 11 09:13:18 compute-0 ceph-mon[74426]: Reconfiguring daemon mgr.compute-0.wwpcae on compute-0
Dec 11 09:13:18 compute-0 ceph-mon[74426]: pgmap v6: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec 11 09:13:18 compute-0 ceph-mon[74426]: from='mgr.14124 192.168.122.100:0/1266408288' entity='mgr.compute-0.wwpcae' 
Dec 11 09:13:18 compute-0 ceph-mon[74426]: from='mgr.14124 192.168.122.100:0/1266408288' entity='mgr.compute-0.wwpcae' 
Dec 11 09:13:18 compute-0 ceph-mon[74426]: from='mgr.14124 192.168.122.100:0/1266408288' entity='mgr.compute-0.wwpcae' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 11 09:13:18 compute-0 ceph-mon[74426]: from='mgr.14124 192.168.122.100:0/1266408288' entity='mgr.compute-0.wwpcae' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec 11 09:13:18 compute-0 ceph-mon[74426]: from='mgr.14124 192.168.122.100:0/1266408288' entity='mgr.compute-0.wwpcae' 
Dec 11 09:13:18 compute-0 ceph-mon[74426]: from='client.? 192.168.122.100:0/1690699852' entity='client.admin' 
Dec 11 09:13:18 compute-0 ceph-mon[74426]: from='mgr.14124 192.168.122.100:0/1266408288' entity='mgr.compute-0.wwpcae' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 11 09:13:18 compute-0 ceph-mon[74426]: from='mgr.14124 192.168.122.100:0/1266408288' entity='mgr.compute-0.wwpcae' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec 11 09:13:18 compute-0 ceph-mon[74426]: from='mgr.14124 192.168.122.100:0/1266408288' entity='mgr.compute-0.wwpcae' 
Dec 11 09:13:18 compute-0 podman[80600]: 2025-12-11 09:13:18.510980048 +0000 UTC m=+0.150725757 container start 15622ff1cab738ac8a814208380ed42f86b263419e9a7fd03421713b677568a6 (image=quay.io/ceph/ceph:v19, name=modest_volhard, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec 11 09:13:18 compute-0 podman[80600]: 2025-12-11 09:13:18.515805832 +0000 UTC m=+0.155551561 container attach 15622ff1cab738ac8a814208380ed42f86b263419e9a7fd03421713b677568a6 (image=quay.io/ceph/ceph:v19, name=modest_volhard, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1)
Dec 11 09:13:18 compute-0 ceph-mon[74426]: mon.compute-0@0(leader).osd e2 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 11 09:13:18 compute-0 ceph-mon[74426]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd set-require-min-compat-client", "version": "mimic"} v 0)
Dec 11 09:13:18 compute-0 ceph-mon[74426]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2373786779' entity='client.admin' cmd=[{"prefix": "osd set-require-min-compat-client", "version": "mimic"}]: dispatch
Dec 11 09:13:19 compute-0 ceph-mgr[74715]: log_channel(cluster) log [DBG] : pgmap v7: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec 11 09:13:19 compute-0 ceph-mon[74426]: mon.compute-0@0(leader).osd e2 do_prune osdmap full prune enabled
Dec 11 09:13:19 compute-0 ceph-mon[74426]: mon.compute-0@0(leader).osd e2 encode_pending skipping prime_pg_temp; mapping job did not start
Dec 11 09:13:19 compute-0 ceph-mon[74426]: from='client.? 192.168.122.100:0/2373786779' entity='client.admin' cmd=[{"prefix": "osd set-require-min-compat-client", "version": "mimic"}]: dispatch
Dec 11 09:13:19 compute-0 ceph-mon[74426]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2373786779' entity='client.admin' cmd='[{"prefix": "osd set-require-min-compat-client", "version": "mimic"}]': finished
Dec 11 09:13:19 compute-0 ceph-mon[74426]: mon.compute-0@0(leader).osd e3 e3: 0 total, 0 up, 0 in
Dec 11 09:13:19 compute-0 modest_volhard[80615]: set require_min_compat_client to mimic
Dec 11 09:13:19 compute-0 ceph-mon[74426]: log_channel(cluster) log [DBG] : osdmap e3: 0 total, 0 up, 0 in
Dec 11 09:13:19 compute-0 systemd[1]: libpod-15622ff1cab738ac8a814208380ed42f86b263419e9a7fd03421713b677568a6.scope: Deactivated successfully.
Dec 11 09:13:19 compute-0 podman[80600]: 2025-12-11 09:13:19.542003669 +0000 UTC m=+1.181749378 container died 15622ff1cab738ac8a814208380ed42f86b263419e9a7fd03421713b677568a6 (image=quay.io/ceph/ceph:v19, name=modest_volhard, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, ceph=True, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 11 09:13:19 compute-0 systemd[1]: var-lib-containers-storage-overlay-63f987d84e114c08de4a025776688162a66be8a1c0158d20b6d68cf7cd7d9f69-merged.mount: Deactivated successfully.
Dec 11 09:13:19 compute-0 podman[80600]: 2025-12-11 09:13:19.576995991 +0000 UTC m=+1.216741700 container remove 15622ff1cab738ac8a814208380ed42f86b263419e9a7fd03421713b677568a6 (image=quay.io/ceph/ceph:v19, name=modest_volhard, CEPH_REF=squid, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 11 09:13:19 compute-0 systemd[1]: libpod-conmon-15622ff1cab738ac8a814208380ed42f86b263419e9a7fd03421713b677568a6.scope: Deactivated successfully.
Dec 11 09:13:19 compute-0 sudo[80597]: pam_unix(sudo:session): session closed for user root
Dec 11 09:13:20 compute-0 sudo[80674]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tltoemwlrijsbqefivuroztveclhrxnk ; /usr/bin/python3'
Dec 11 09:13:20 compute-0 sudo[80674]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 11 09:13:20 compute-0 python3[80676]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch apply --in-file /home/ceph_spec.yaml _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 11 09:13:20 compute-0 podman[80677]: 2025-12-11 09:13:20.34497097 +0000 UTC m=+0.037799959 container create 16915842c14b25a545d818aeaad869c4f0be474e2dbaf8e6c41c90db0f481271 (image=quay.io/ceph/ceph:v19, name=zealous_dewdney, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec 11 09:13:20 compute-0 systemd[1]: Started libpod-conmon-16915842c14b25a545d818aeaad869c4f0be474e2dbaf8e6c41c90db0f481271.scope.
Dec 11 09:13:20 compute-0 systemd[1]: Started libcrun container.
Dec 11 09:13:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0dc1251e941ccb742b8b11a151338e1c4aef44d13f57ef375bdaf22f2af57fe3/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 11 09:13:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0dc1251e941ccb742b8b11a151338e1c4aef44d13f57ef375bdaf22f2af57fe3/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Dec 11 09:13:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0dc1251e941ccb742b8b11a151338e1c4aef44d13f57ef375bdaf22f2af57fe3/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec 11 09:13:20 compute-0 podman[80677]: 2025-12-11 09:13:20.409795259 +0000 UTC m=+0.102624268 container init 16915842c14b25a545d818aeaad869c4f0be474e2dbaf8e6c41c90db0f481271 (image=quay.io/ceph/ceph:v19, name=zealous_dewdney, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 11 09:13:20 compute-0 podman[80677]: 2025-12-11 09:13:20.416972893 +0000 UTC m=+0.109801882 container start 16915842c14b25a545d818aeaad869c4f0be474e2dbaf8e6c41c90db0f481271 (image=quay.io/ceph/ceph:v19, name=zealous_dewdney, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, ceph=True)
Dec 11 09:13:20 compute-0 podman[80677]: 2025-12-11 09:13:20.423101332 +0000 UTC m=+0.115930341 container attach 16915842c14b25a545d818aeaad869c4f0be474e2dbaf8e6c41c90db0f481271 (image=quay.io/ceph/ceph:v19, name=zealous_dewdney, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Dec 11 09:13:20 compute-0 podman[80677]: 2025-12-11 09:13:20.328607462 +0000 UTC m=+0.021436441 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Dec 11 09:13:20 compute-0 ceph-mon[74426]: pgmap v7: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec 11 09:13:20 compute-0 ceph-mon[74426]: from='client.? 192.168.122.100:0/2373786779' entity='client.admin' cmd='[{"prefix": "osd set-require-min-compat-client", "version": "mimic"}]': finished
Dec 11 09:13:20 compute-0 ceph-mon[74426]: osdmap e3: 0 total, 0 up, 0 in
Dec 11 09:13:20 compute-0 ceph-mgr[74715]: log_channel(audit) log [DBG] : from='client.14174 -' entity='client.admin' cmd=[{"prefix": "orch apply", "target": ["mon-mgr", ""]}]: dispatch
Dec 11 09:13:20 compute-0 sudo[80716]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 11 09:13:20 compute-0 sudo[80716]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 11 09:13:20 compute-0 sudo[80716]: pam_unix(sudo:session): session closed for user root
Dec 11 09:13:20 compute-0 sudo[80741]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --timeout 895 check-host --expect-hostname compute-0
Dec 11 09:13:20 compute-0 sudo[80741]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 11 09:13:21 compute-0 ceph-mgr[74715]: log_channel(cluster) log [DBG] : pgmap v9: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec 11 09:13:21 compute-0 ceph-mgr[74715]: [volumes INFO mgr_util] scanning for idle connections..
Dec 11 09:13:21 compute-0 ceph-mgr[74715]: [volumes INFO mgr_util] cleaning up connections: []
Dec 11 09:13:21 compute-0 ceph-mgr[74715]: [volumes INFO mgr_util] scanning for idle connections..
Dec 11 09:13:21 compute-0 ceph-mgr[74715]: [volumes INFO mgr_util] cleaning up connections: []
Dec 11 09:13:21 compute-0 ceph-mgr[74715]: [volumes INFO mgr_util] scanning for idle connections..
Dec 11 09:13:21 compute-0 ceph-mgr[74715]: [volumes INFO mgr_util] cleaning up connections: []
Dec 11 09:13:21 compute-0 sudo[80741]: pam_unix(sudo:session): session closed for user root
Dec 11 09:13:21 compute-0 ceph-mon[74426]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/inventory}] v 0)
Dec 11 09:13:21 compute-0 ceph-mon[74426]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/1266408288' entity='mgr.compute-0.wwpcae' 
Dec 11 09:13:21 compute-0 ceph-mon[74426]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/inventory}] v 0)
Dec 11 09:13:21 compute-0 ceph-mon[74426]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/1266408288' entity='mgr.compute-0.wwpcae' 
Dec 11 09:13:21 compute-0 ceph-mon[74426]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/inventory}] v 0)
Dec 11 09:13:21 compute-0 ceph-mon[74426]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/1266408288' entity='mgr.compute-0.wwpcae' 
Dec 11 09:13:21 compute-0 ceph-mon[74426]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/inventory}] v 0)
Dec 11 09:13:21 compute-0 ceph-mon[74426]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/1266408288' entity='mgr.compute-0.wwpcae' 
Dec 11 09:13:21 compute-0 ceph-mgr[74715]: [cephadm INFO root] Added host compute-0
Dec 11 09:13:21 compute-0 ceph-mgr[74715]: log_channel(cephadm) log [INF] : Added host compute-0
Dec 11 09:13:21 compute-0 ceph-mon[74426]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 11 09:13:21 compute-0 ceph-mon[74426]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/1266408288' entity='mgr.compute-0.wwpcae' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 11 09:13:21 compute-0 ceph-mon[74426]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Dec 11 09:13:21 compute-0 ceph-mon[74426]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/1266408288' entity='mgr.compute-0.wwpcae' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec 11 09:13:21 compute-0 ceph-mon[74426]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Dec 11 09:13:21 compute-0 ceph-mon[74426]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/1266408288' entity='mgr.compute-0.wwpcae' 
Dec 11 09:13:21 compute-0 sudo[80786]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 11 09:13:21 compute-0 sudo[80786]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 11 09:13:21 compute-0 sudo[80786]: pam_unix(sudo:session): session closed for user root
Dec 11 09:13:22 compute-0 ceph-mon[74426]: from='client.14174 -' entity='client.admin' cmd=[{"prefix": "orch apply", "target": ["mon-mgr", ""]}]: dispatch
Dec 11 09:13:22 compute-0 ceph-mon[74426]: pgmap v9: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec 11 09:13:22 compute-0 ceph-mon[74426]: from='mgr.14124 192.168.122.100:0/1266408288' entity='mgr.compute-0.wwpcae' 
Dec 11 09:13:22 compute-0 ceph-mon[74426]: from='mgr.14124 192.168.122.100:0/1266408288' entity='mgr.compute-0.wwpcae' 
Dec 11 09:13:22 compute-0 ceph-mon[74426]: from='mgr.14124 192.168.122.100:0/1266408288' entity='mgr.compute-0.wwpcae' 
Dec 11 09:13:22 compute-0 ceph-mon[74426]: from='mgr.14124 192.168.122.100:0/1266408288' entity='mgr.compute-0.wwpcae' 
Dec 11 09:13:22 compute-0 ceph-mon[74426]: Added host compute-0
Dec 11 09:13:22 compute-0 ceph-mon[74426]: from='mgr.14124 192.168.122.100:0/1266408288' entity='mgr.compute-0.wwpcae' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 11 09:13:22 compute-0 ceph-mon[74426]: from='mgr.14124 192.168.122.100:0/1266408288' entity='mgr.compute-0.wwpcae' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec 11 09:13:22 compute-0 ceph-mon[74426]: from='mgr.14124 192.168.122.100:0/1266408288' entity='mgr.compute-0.wwpcae' 
Dec 11 09:13:22 compute-0 ceph-mgr[74715]: [cephadm INFO cephadm.serve] Deploying cephadm binary to compute-1
Dec 11 09:13:22 compute-0 ceph-mgr[74715]: log_channel(cephadm) log [INF] : Deploying cephadm binary to compute-1
Dec 11 09:13:23 compute-0 ceph-mgr[74715]: log_channel(cluster) log [DBG] : pgmap v10: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec 11 09:13:23 compute-0 ceph-mon[74426]: Deploying cephadm binary to compute-1
Dec 11 09:13:23 compute-0 ceph-mon[74426]: mon.compute-0@0(leader).osd e3 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 11 09:13:24 compute-0 ceph-mon[74426]: pgmap v10: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec 11 09:13:25 compute-0 ceph-mgr[74715]: log_channel(cluster) log [DBG] : pgmap v11: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec 11 09:13:26 compute-0 ceph-mon[74426]: pgmap v11: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec 11 09:13:26 compute-0 ceph-mon[74426]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/inventory}] v 0)
Dec 11 09:13:26 compute-0 ceph-mon[74426]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/1266408288' entity='mgr.compute-0.wwpcae' 
Dec 11 09:13:26 compute-0 ceph-mgr[74715]: [cephadm INFO root] Added host compute-1
Dec 11 09:13:26 compute-0 ceph-mgr[74715]: log_channel(cephadm) log [INF] : Added host compute-1
Dec 11 09:13:26 compute-0 ceph-mon[74426]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Dec 11 09:13:26 compute-0 ceph-mon[74426]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/1266408288' entity='mgr.compute-0.wwpcae' 
Dec 11 09:13:27 compute-0 ceph-mgr[74715]: log_channel(cluster) log [DBG] : pgmap v12: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec 11 09:13:27 compute-0 ceph-mon[74426]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Dec 11 09:13:27 compute-0 ceph-mon[74426]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/1266408288' entity='mgr.compute-0.wwpcae' 
Dec 11 09:13:27 compute-0 ceph-mon[74426]: from='mgr.14124 192.168.122.100:0/1266408288' entity='mgr.compute-0.wwpcae' 
Dec 11 09:13:27 compute-0 ceph-mon[74426]: Added host compute-1
Dec 11 09:13:27 compute-0 ceph-mon[74426]: from='mgr.14124 192.168.122.100:0/1266408288' entity='mgr.compute-0.wwpcae' 
Dec 11 09:13:27 compute-0 ceph-mon[74426]: from='mgr.14124 192.168.122.100:0/1266408288' entity='mgr.compute-0.wwpcae' 
Dec 11 09:13:27 compute-0 ceph-mgr[74715]: [cephadm INFO cephadm.serve] Deploying cephadm binary to compute-2
Dec 11 09:13:27 compute-0 ceph-mgr[74715]: log_channel(cephadm) log [INF] : Deploying cephadm binary to compute-2
Dec 11 09:13:28 compute-0 ceph-mon[74426]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Dec 11 09:13:28 compute-0 ceph-mon[74426]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/1266408288' entity='mgr.compute-0.wwpcae' 
Dec 11 09:13:28 compute-0 ceph-mon[74426]: pgmap v12: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec 11 09:13:28 compute-0 ceph-mon[74426]: from='mgr.14124 192.168.122.100:0/1266408288' entity='mgr.compute-0.wwpcae' 
Dec 11 09:13:28 compute-0 ceph-mon[74426]: mon.compute-0@0(leader).osd e3 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 11 09:13:29 compute-0 ceph-mgr[74715]: log_channel(cluster) log [DBG] : pgmap v13: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec 11 09:13:29 compute-0 ceph-mon[74426]: Deploying cephadm binary to compute-2
Dec 11 09:13:30 compute-0 ceph-mon[74426]: pgmap v13: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec 11 09:13:31 compute-0 ceph-mgr[74715]: log_channel(cluster) log [DBG] : pgmap v14: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec 11 09:13:31 compute-0 ceph-mon[74426]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/inventory}] v 0)
Dec 11 09:13:31 compute-0 ceph-mon[74426]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/1266408288' entity='mgr.compute-0.wwpcae' 
Dec 11 09:13:31 compute-0 ceph-mgr[74715]: [cephadm INFO root] Added host compute-2
Dec 11 09:13:31 compute-0 ceph-mgr[74715]: log_channel(cephadm) log [INF] : Added host compute-2
Dec 11 09:13:31 compute-0 ceph-mgr[74715]: [cephadm INFO root] Saving service mon spec with placement compute-0;compute-1;compute-2
Dec 11 09:13:31 compute-0 ceph-mgr[74715]: log_channel(cephadm) log [INF] : Saving service mon spec with placement compute-0;compute-1;compute-2
Dec 11 09:13:31 compute-0 ceph-mon[74426]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mon}] v 0)
Dec 11 09:13:31 compute-0 ceph-mon[74426]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/1266408288' entity='mgr.compute-0.wwpcae' 
Dec 11 09:13:31 compute-0 ceph-mgr[74715]: [cephadm INFO root] Saving service mgr spec with placement compute-0;compute-1;compute-2
Dec 11 09:13:31 compute-0 ceph-mgr[74715]: log_channel(cephadm) log [INF] : Saving service mgr spec with placement compute-0;compute-1;compute-2
Dec 11 09:13:31 compute-0 ceph-mon[74426]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mgr}] v 0)
Dec 11 09:13:31 compute-0 ceph-mon[74426]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/1266408288' entity='mgr.compute-0.wwpcae' 
Dec 11 09:13:31 compute-0 ceph-mgr[74715]: [cephadm INFO root] Marking host: compute-0 for OSDSpec preview refresh.
Dec 11 09:13:31 compute-0 ceph-mgr[74715]: log_channel(cephadm) log [INF] : Marking host: compute-0 for OSDSpec preview refresh.
Dec 11 09:13:31 compute-0 ceph-mgr[74715]: [cephadm INFO root] Marking host: compute-1 for OSDSpec preview refresh.
Dec 11 09:13:31 compute-0 ceph-mgr[74715]: log_channel(cephadm) log [INF] : Marking host: compute-1 for OSDSpec preview refresh.
Dec 11 09:13:31 compute-0 ceph-mgr[74715]: [cephadm INFO root] Saving service osd.default_drive_group spec with placement compute-0;compute-1;compute-2
Dec 11 09:13:31 compute-0 ceph-mgr[74715]: log_channel(cephadm) log [INF] : Saving service osd.default_drive_group spec with placement compute-0;compute-1;compute-2
Dec 11 09:13:31 compute-0 ceph-mon[74426]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.osd.default_drive_group}] v 0)
Dec 11 09:13:31 compute-0 ceph-mon[74426]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/1266408288' entity='mgr.compute-0.wwpcae' 
Dec 11 09:13:31 compute-0 zealous_dewdney[80692]: Added host 'compute-0' with addr '192.168.122.100'
Dec 11 09:13:31 compute-0 zealous_dewdney[80692]: Added host 'compute-1' with addr '192.168.122.101'
Dec 11 09:13:31 compute-0 zealous_dewdney[80692]: Added host 'compute-2' with addr '192.168.122.102'
Dec 11 09:13:31 compute-0 zealous_dewdney[80692]: Scheduled mon update...
Dec 11 09:13:31 compute-0 zealous_dewdney[80692]: Scheduled mgr update...
Dec 11 09:13:31 compute-0 zealous_dewdney[80692]: Scheduled osd.default_drive_group update...
Dec 11 09:13:31 compute-0 systemd[1]: libpod-16915842c14b25a545d818aeaad869c4f0be474e2dbaf8e6c41c90db0f481271.scope: Deactivated successfully.
Dec 11 09:13:31 compute-0 podman[80677]: 2025-12-11 09:13:31.369166568 +0000 UTC m=+11.061995617 container died 16915842c14b25a545d818aeaad869c4f0be474e2dbaf8e6c41c90db0f481271 (image=quay.io/ceph/ceph:v19, name=zealous_dewdney, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True)
Dec 11 09:13:31 compute-0 systemd[1]: var-lib-containers-storage-overlay-0dc1251e941ccb742b8b11a151338e1c4aef44d13f57ef375bdaf22f2af57fe3-merged.mount: Deactivated successfully.
Dec 11 09:13:31 compute-0 podman[80677]: 2025-12-11 09:13:31.417782335 +0000 UTC m=+11.110611324 container remove 16915842c14b25a545d818aeaad869c4f0be474e2dbaf8e6c41c90db0f481271 (image=quay.io/ceph/ceph:v19, name=zealous_dewdney, CEPH_REF=squid, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, ceph=True)
Dec 11 09:13:31 compute-0 systemd[1]: libpod-conmon-16915842c14b25a545d818aeaad869c4f0be474e2dbaf8e6c41c90db0f481271.scope: Deactivated successfully.
Dec 11 09:13:31 compute-0 sudo[80674]: pam_unix(sudo:session): session closed for user root
Dec 11 09:13:31 compute-0 sudo[80849]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uvgjvicjwamrwtvjfnjwcrrtuhrbybam ; /usr/bin/python3'
Dec 11 09:13:31 compute-0 sudo[80849]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 11 09:13:31 compute-0 python3[80851]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   status --format json | jq .osdmap.num_up_osds _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 11 09:13:31 compute-0 podman[80853]: 2025-12-11 09:13:31.935705762 +0000 UTC m=+0.044630221 container create e9aef138801e579faccbce69334b379fe3ee325d075044417bf2c3637c5e3de8 (image=quay.io/ceph/ceph:v19, name=strange_chaum, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Dec 11 09:13:31 compute-0 systemd[1]: Started libpod-conmon-e9aef138801e579faccbce69334b379fe3ee325d075044417bf2c3637c5e3de8.scope.
Dec 11 09:13:31 compute-0 systemd[1]: Started libcrun container.
Dec 11 09:13:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2c3436b577932fbfcd6f17fe46bcbdbea7f264d5c9f697f924faeb71080f5840/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 11 09:13:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2c3436b577932fbfcd6f17fe46bcbdbea7f264d5c9f697f924faeb71080f5840/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Dec 11 09:13:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2c3436b577932fbfcd6f17fe46bcbdbea7f264d5c9f697f924faeb71080f5840/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec 11 09:13:32 compute-0 podman[80853]: 2025-12-11 09:13:31.916062313 +0000 UTC m=+0.024986792 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Dec 11 09:13:32 compute-0 podman[80853]: 2025-12-11 09:13:32.077180672 +0000 UTC m=+0.186105131 container init e9aef138801e579faccbce69334b379fe3ee325d075044417bf2c3637c5e3de8 (image=quay.io/ceph/ceph:v19, name=strange_chaum, ceph=True, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 11 09:13:32 compute-0 podman[80853]: 2025-12-11 09:13:32.085573238 +0000 UTC m=+0.194497697 container start e9aef138801e579faccbce69334b379fe3ee325d075044417bf2c3637c5e3de8 (image=quay.io/ceph/ceph:v19, name=strange_chaum, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 11 09:13:32 compute-0 podman[80853]: 2025-12-11 09:13:32.269782696 +0000 UTC m=+0.378707165 container attach e9aef138801e579faccbce69334b379fe3ee325d075044417bf2c3637c5e3de8 (image=quay.io/ceph/ceph:v19, name=strange_chaum, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 11 09:13:32 compute-0 ceph-mon[74426]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json"} v 0)
Dec 11 09:13:32 compute-0 ceph-mon[74426]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1431029758' entity='client.admin' cmd=[{"prefix": "status", "format": "json"}]: dispatch
Dec 11 09:13:32 compute-0 strange_chaum[80869]: 
Dec 11 09:13:32 compute-0 strange_chaum[80869]: {"fsid":"31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060","health":{"status":"HEALTH_WARN","checks":{"TOO_FEW_OSDS":{"severity":"HEALTH_WARN","summary":{"message":"OSD count 0 < osd_pool_default_size 1","count":1},"muted":false}},"mutes":[]},"election_epoch":5,"quorum":[0],"quorum_names":["compute-0"],"quorum_age":63,"monmap":{"epoch":1,"min_mon_release_name":"squid","num_mons":1},"osdmap":{"epoch":3,"num_osds":0,"num_up_osds":0,"osd_up_since":0,"num_in_osds":0,"osd_in_since":0,"num_remapped_pgs":0},"pgmap":{"pgs_by_state":[],"num_pgs":0,"num_pools":0,"num_objects":0,"data_bytes":0,"bytes_used":0,"bytes_avail":0,"bytes_total":0},"fsmap":{"epoch":1,"btime":"2025-12-11T09:12:26:191373+0000","by_rank":[],"up:standby":0},"mgrmap":{"available":true,"num_standbys":0,"modules":["cephadm","iostat","nfs","restful"],"services":{}},"servicemap":{"epoch":1,"modified":"2025-12-11T09:12:26.198210+0000","services":{}},"progress_events":{}}
Dec 11 09:13:32 compute-0 ceph-mon[74426]: pgmap v14: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec 11 09:13:32 compute-0 ceph-mon[74426]: from='mgr.14124 192.168.122.100:0/1266408288' entity='mgr.compute-0.wwpcae' 
Dec 11 09:13:32 compute-0 ceph-mon[74426]: Added host compute-2
Dec 11 09:13:32 compute-0 ceph-mon[74426]: Saving service mon spec with placement compute-0;compute-1;compute-2
Dec 11 09:13:32 compute-0 ceph-mon[74426]: from='mgr.14124 192.168.122.100:0/1266408288' entity='mgr.compute-0.wwpcae' 
Dec 11 09:13:32 compute-0 ceph-mon[74426]: Saving service mgr spec with placement compute-0;compute-1;compute-2
Dec 11 09:13:32 compute-0 ceph-mon[74426]: from='mgr.14124 192.168.122.100:0/1266408288' entity='mgr.compute-0.wwpcae' 
Dec 11 09:13:32 compute-0 ceph-mon[74426]: Marking host: compute-0 for OSDSpec preview refresh.
Dec 11 09:13:32 compute-0 ceph-mon[74426]: Marking host: compute-1 for OSDSpec preview refresh.
Dec 11 09:13:32 compute-0 ceph-mon[74426]: Saving service osd.default_drive_group spec with placement compute-0;compute-1;compute-2
Dec 11 09:13:32 compute-0 ceph-mon[74426]: from='mgr.14124 192.168.122.100:0/1266408288' entity='mgr.compute-0.wwpcae' 
Dec 11 09:13:32 compute-0 systemd[1]: libpod-e9aef138801e579faccbce69334b379fe3ee325d075044417bf2c3637c5e3de8.scope: Deactivated successfully.
Dec 11 09:13:32 compute-0 podman[80853]: 2025-12-11 09:13:32.555944316 +0000 UTC m=+0.664868775 container died e9aef138801e579faccbce69334b379fe3ee325d075044417bf2c3637c5e3de8 (image=quay.io/ceph/ceph:v19, name=strange_chaum, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, ceph=True, org.label-schema.build-date=20250325)
Dec 11 09:13:32 compute-0 systemd[1]: var-lib-containers-storage-overlay-2c3436b577932fbfcd6f17fe46bcbdbea7f264d5c9f697f924faeb71080f5840-merged.mount: Deactivated successfully.
Dec 11 09:13:33 compute-0 ceph-mgr[74715]: log_channel(cluster) log [DBG] : pgmap v15: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec 11 09:13:33 compute-0 podman[80853]: 2025-12-11 09:13:33.062190136 +0000 UTC m=+1.171114595 container remove e9aef138801e579faccbce69334b379fe3ee325d075044417bf2c3637c5e3de8 (image=quay.io/ceph/ceph:v19, name=strange_chaum, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 11 09:13:33 compute-0 sudo[80849]: pam_unix(sudo:session): session closed for user root
Dec 11 09:13:33 compute-0 systemd[1]: libpod-conmon-e9aef138801e579faccbce69334b379fe3ee325d075044417bf2c3637c5e3de8.scope: Deactivated successfully.
Dec 11 09:13:33 compute-0 ceph-mon[74426]: from='client.? 192.168.122.100:0/1431029758' entity='client.admin' cmd=[{"prefix": "status", "format": "json"}]: dispatch
Dec 11 09:13:33 compute-0 ceph-mon[74426]: mon.compute-0@0(leader).osd e3 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 11 09:13:34 compute-0 ceph-mon[74426]: pgmap v15: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec 11 09:13:35 compute-0 ceph-mgr[74715]: log_channel(cluster) log [DBG] : pgmap v16: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec 11 09:13:36 compute-0 ceph-mon[74426]: pgmap v16: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec 11 09:13:37 compute-0 ceph-mgr[74715]: log_channel(cluster) log [DBG] : pgmap v17: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec 11 09:13:38 compute-0 ceph-mon[74426]: pgmap v17: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec 11 09:13:38 compute-0 ceph-mon[74426]: mon.compute-0@0(leader).osd e3 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 11 09:13:39 compute-0 ceph-mgr[74715]: log_channel(cluster) log [DBG] : pgmap v18: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec 11 09:13:39 compute-0 ceph-mon[74426]: pgmap v18: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec 11 09:13:41 compute-0 ceph-mgr[74715]: log_channel(cluster) log [DBG] : pgmap v19: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec 11 09:13:42 compute-0 ceph-mon[74426]: pgmap v19: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec 11 09:13:43 compute-0 ceph-mgr[74715]: log_channel(cluster) log [DBG] : pgmap v20: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec 11 09:13:43 compute-0 ceph-mon[74426]: mon.compute-0@0(leader).osd e3 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 11 09:13:44 compute-0 ceph-mon[74426]: pgmap v20: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec 11 09:13:45 compute-0 ceph-mgr[74715]: log_channel(cluster) log [DBG] : pgmap v21: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec 11 09:13:46 compute-0 ceph-mon[74426]: pgmap v21: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec 11 09:13:47 compute-0 ceph-mgr[74715]: log_channel(cluster) log [DBG] : pgmap v22: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec 11 09:13:48 compute-0 ceph-mon[74426]: pgmap v22: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec 11 09:13:48 compute-0 ceph-mon[74426]: mon.compute-0@0(leader).osd e3 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 11 09:13:49 compute-0 ceph-mgr[74715]: log_channel(cluster) log [DBG] : pgmap v23: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec 11 09:13:50 compute-0 ceph-mon[74426]: pgmap v23: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec 11 09:13:51 compute-0 ceph-mgr[74715]: log_channel(cluster) log [DBG] : pgmap v24: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec 11 09:13:51 compute-0 ceph-mgr[74715]: [balancer INFO root] Optimize plan auto_2025-12-11_09:13:51
Dec 11 09:13:51 compute-0 ceph-mgr[74715]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec 11 09:13:51 compute-0 ceph-mgr[74715]: [balancer INFO root] do_upmap
Dec 11 09:13:51 compute-0 ceph-mgr[74715]: [balancer INFO root] No pools available
Dec 11 09:13:51 compute-0 ceph-mgr[74715]: [pg_autoscaler INFO root] _maybe_adjust
Dec 11 09:13:51 compute-0 ceph-mgr[74715]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec 11 09:13:51 compute-0 ceph-mgr[74715]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec 11 09:13:51 compute-0 ceph-mgr[74715]: [volumes INFO mgr_util] scanning for idle connections..
Dec 11 09:13:51 compute-0 ceph-mgr[74715]: [volumes INFO mgr_util] cleaning up connections: []
Dec 11 09:13:51 compute-0 ceph-mgr[74715]: [volumes INFO mgr_util] scanning for idle connections..
Dec 11 09:13:51 compute-0 ceph-mgr[74715]: [volumes INFO mgr_util] cleaning up connections: []
Dec 11 09:13:51 compute-0 ceph-mgr[74715]: [volumes INFO mgr_util] scanning for idle connections..
Dec 11 09:13:51 compute-0 ceph-mgr[74715]: [volumes INFO mgr_util] cleaning up connections: []
Dec 11 09:13:52 compute-0 ceph-mon[74426]: pgmap v24: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec 11 09:13:53 compute-0 ceph-mgr[74715]: log_channel(cluster) log [DBG] : pgmap v25: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec 11 09:13:53 compute-0 ceph-mon[74426]: mon.compute-0@0(leader).osd e3 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 11 09:13:54 compute-0 ceph-mon[74426]: pgmap v25: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec 11 09:13:55 compute-0 ceph-mgr[74715]: log_channel(cluster) log [DBG] : pgmap v26: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec 11 09:13:56 compute-0 ceph-mon[74426]: pgmap v26: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec 11 09:13:57 compute-0 ceph-mgr[74715]: log_channel(cluster) log [DBG] : pgmap v27: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec 11 09:13:58 compute-0 ceph-mon[74426]: pgmap v27: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec 11 09:13:58 compute-0 ceph-mon[74426]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Dec 11 09:13:58 compute-0 ceph-mon[74426]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/1266408288' entity='mgr.compute-0.wwpcae' 
Dec 11 09:13:58 compute-0 ceph-mon[74426]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Dec 11 09:13:58 compute-0 ceph-mon[74426]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/1266408288' entity='mgr.compute-0.wwpcae' 
Dec 11 09:13:58 compute-0 ceph-mon[74426]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Dec 11 09:13:58 compute-0 ceph-mon[74426]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/1266408288' entity='mgr.compute-0.wwpcae' 
Dec 11 09:13:58 compute-0 ceph-mon[74426]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Dec 11 09:13:58 compute-0 ceph-mon[74426]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/1266408288' entity='mgr.compute-0.wwpcae' 
Dec 11 09:13:58 compute-0 ceph-mon[74426]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"} v 0)
Dec 11 09:13:58 compute-0 ceph-mon[74426]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/1266408288' entity='mgr.compute-0.wwpcae' cmd=[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]: dispatch
Dec 11 09:13:58 compute-0 ceph-mon[74426]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 11 09:13:58 compute-0 ceph-mon[74426]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/1266408288' entity='mgr.compute-0.wwpcae' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 11 09:13:58 compute-0 ceph-mon[74426]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Dec 11 09:13:58 compute-0 ceph-mon[74426]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/1266408288' entity='mgr.compute-0.wwpcae' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec 11 09:13:58 compute-0 ceph-mgr[74715]: [cephadm INFO cephadm.serve] Updating compute-1:/etc/ceph/ceph.conf
Dec 11 09:13:58 compute-0 ceph-mgr[74715]: log_channel(cephadm) log [INF] : Updating compute-1:/etc/ceph/ceph.conf
Dec 11 09:13:58 compute-0 ceph-mon[74426]: mon.compute-0@0(leader).osd e3 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 11 09:13:59 compute-0 ceph-mgr[74715]: log_channel(cluster) log [DBG] : pgmap v28: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec 11 09:13:59 compute-0 ceph-mgr[74715]: [cephadm INFO cephadm.serve] Updating compute-1:/var/lib/ceph/31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060/config/ceph.conf
Dec 11 09:13:59 compute-0 ceph-mgr[74715]: log_channel(cephadm) log [INF] : Updating compute-1:/var/lib/ceph/31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060/config/ceph.conf
Dec 11 09:13:59 compute-0 ceph-mon[74426]: from='mgr.14124 192.168.122.100:0/1266408288' entity='mgr.compute-0.wwpcae' 
Dec 11 09:13:59 compute-0 ceph-mon[74426]: from='mgr.14124 192.168.122.100:0/1266408288' entity='mgr.compute-0.wwpcae' 
Dec 11 09:13:59 compute-0 ceph-mon[74426]: from='mgr.14124 192.168.122.100:0/1266408288' entity='mgr.compute-0.wwpcae' 
Dec 11 09:13:59 compute-0 ceph-mon[74426]: from='mgr.14124 192.168.122.100:0/1266408288' entity='mgr.compute-0.wwpcae' 
Dec 11 09:13:59 compute-0 ceph-mon[74426]: from='mgr.14124 192.168.122.100:0/1266408288' entity='mgr.compute-0.wwpcae' cmd=[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]: dispatch
Dec 11 09:13:59 compute-0 ceph-mon[74426]: from='mgr.14124 192.168.122.100:0/1266408288' entity='mgr.compute-0.wwpcae' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 11 09:13:59 compute-0 ceph-mon[74426]: from='mgr.14124 192.168.122.100:0/1266408288' entity='mgr.compute-0.wwpcae' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec 11 09:13:59 compute-0 ceph-mon[74426]: Updating compute-1:/etc/ceph/ceph.conf
Dec 11 09:13:59 compute-0 ceph-mgr[74715]: [cephadm INFO cephadm.serve] Updating compute-1:/etc/ceph/ceph.client.admin.keyring
Dec 11 09:13:59 compute-0 ceph-mgr[74715]: log_channel(cephadm) log [INF] : Updating compute-1:/etc/ceph/ceph.client.admin.keyring
Dec 11 09:14:00 compute-0 ceph-mgr[74715]: [cephadm INFO cephadm.serve] Updating compute-1:/var/lib/ceph/31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060/config/ceph.client.admin.keyring
Dec 11 09:14:00 compute-0 ceph-mgr[74715]: log_channel(cephadm) log [INF] : Updating compute-1:/var/lib/ceph/31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060/config/ceph.client.admin.keyring
Dec 11 09:14:00 compute-0 ceph-mon[74426]: pgmap v28: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec 11 09:14:00 compute-0 ceph-mon[74426]: Updating compute-1:/var/lib/ceph/31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060/config/ceph.conf
Dec 11 09:14:00 compute-0 ceph-mon[74426]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Dec 11 09:14:00 compute-0 ceph-mon[74426]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/1266408288' entity='mgr.compute-0.wwpcae' 
Dec 11 09:14:00 compute-0 ceph-mon[74426]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Dec 11 09:14:00 compute-0 ceph-mon[74426]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/1266408288' entity='mgr.compute-0.wwpcae' 
Dec 11 09:14:00 compute-0 ceph-mon[74426]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Dec 11 09:14:00 compute-0 ceph-mon[74426]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/1266408288' entity='mgr.compute-0.wwpcae' 
Dec 11 09:14:00 compute-0 ceph-mgr[74715]: [cephadm ERROR cephadm.serve] Failed to apply mon spec MONSpec.from_json(yaml.safe_load('''service_type: mon
                                           service_name: mon
                                           placement:
                                             hosts:
                                             - compute-0
                                             - compute-1
                                             - compute-2
                                           ''')): Cannot place <MONSpec for service_name=mon> on compute-2: Unknown hosts
Dec 11 09:14:00 compute-0 ceph-mgr[74715]: log_channel(cephadm) log [ERR] : Failed to apply mon spec MONSpec.from_json(yaml.safe_load('''service_type: mon
                                           service_name: mon
                                           placement:
                                             hosts:
                                             - compute-0
                                             - compute-1
                                             - compute-2
                                           ''')): Cannot place <MONSpec for service_name=mon> on compute-2: Unknown hosts
Dec 11 09:14:00 compute-0 ceph-mgr[74715]: [cephadm ERROR cephadm.serve] Failed to apply mgr spec ServiceSpec.from_json(yaml.safe_load('''service_type: mgr
                                           service_name: mgr
                                           placement:
                                             hosts:
                                             - compute-0
                                             - compute-1
                                             - compute-2
                                           ''')): Cannot place <ServiceSpec for service_name=mgr> on compute-2: Unknown hosts
Dec 11 09:14:00 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-mgr-compute-0-wwpcae[74711]: 2025-12-11T09:14:00.680+0000 7f8d269c7640 -1 log_channel(cephadm) log [ERR] : Failed to apply mon spec MONSpec.from_json(yaml.safe_load('''service_type: mon
Dec 11 09:14:00 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-mgr-compute-0-wwpcae[74711]: service_name: mon
Dec 11 09:14:00 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-mgr-compute-0-wwpcae[74711]: placement:
Dec 11 09:14:00 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-mgr-compute-0-wwpcae[74711]:   hosts:
Dec 11 09:14:00 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-mgr-compute-0-wwpcae[74711]:   - compute-0
Dec 11 09:14:00 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-mgr-compute-0-wwpcae[74711]:   - compute-1
Dec 11 09:14:00 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-mgr-compute-0-wwpcae[74711]:   - compute-2
Dec 11 09:14:00 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-mgr-compute-0-wwpcae[74711]: ''')): Cannot place <MONSpec for service_name=mon> on compute-2: Unknown hosts
Dec 11 09:14:00 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-mgr-compute-0-wwpcae[74711]: 2025-12-11T09:14:00.681+0000 7f8d269c7640 -1 log_channel(cephadm) log [ERR] : Failed to apply mgr spec ServiceSpec.from_json(yaml.safe_load('''service_type: mgr
Dec 11 09:14:00 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-mgr-compute-0-wwpcae[74711]: service_name: mgr
Dec 11 09:14:00 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-mgr-compute-0-wwpcae[74711]: placement:
Dec 11 09:14:00 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-mgr-compute-0-wwpcae[74711]:   hosts:
Dec 11 09:14:00 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-mgr-compute-0-wwpcae[74711]:   - compute-0
Dec 11 09:14:00 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-mgr-compute-0-wwpcae[74711]:   - compute-1
Dec 11 09:14:00 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-mgr-compute-0-wwpcae[74711]:   - compute-2
Dec 11 09:14:00 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-mgr-compute-0-wwpcae[74711]: ''')): Cannot place <ServiceSpec for service_name=mgr> on compute-2: Unknown hosts
Dec 11 09:14:00 compute-0 ceph-mgr[74715]: log_channel(cephadm) log [ERR] : Failed to apply mgr spec ServiceSpec.from_json(yaml.safe_load('''service_type: mgr
                                           service_name: mgr
                                           placement:
                                             hosts:
                                             - compute-0
                                             - compute-1
                                             - compute-2
                                           ''')): Cannot place <ServiceSpec for service_name=mgr> on compute-2: Unknown hosts
Dec 11 09:14:00 compute-0 ceph-mgr[74715]: log_channel(cluster) log [DBG] : pgmap v29: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec 11 09:14:00 compute-0 ceph-mgr[74715]: log_channel(cluster) log [DBG] : pgmap v30: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec 11 09:14:00 compute-0 ceph-mgr[74715]: [progress INFO root] update: starting ev 0ed8d95b-98e3-4985-8f0a-734148cfc7f1 (Updating crash deployment (+1 -> 2))
Dec 11 09:14:00 compute-0 ceph-mon[74426]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.crash.compute-1", "caps": ["mon", "profile crash", "mgr", "profile crash"]} v 0)
Dec 11 09:14:00 compute-0 ceph-mon[74426]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/1266408288' entity='mgr.compute-0.wwpcae' cmd=[{"prefix": "auth get-or-create", "entity": "client.crash.compute-1", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]: dispatch
Dec 11 09:14:00 compute-0 ceph-mon[74426]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/1266408288' entity='mgr.compute-0.wwpcae' cmd='[{"prefix": "auth get-or-create", "entity": "client.crash.compute-1", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]': finished
Dec 11 09:14:00 compute-0 ceph-mon[74426]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 11 09:14:00 compute-0 ceph-mon[74426]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/1266408288' entity='mgr.compute-0.wwpcae' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 11 09:14:00 compute-0 ceph-mgr[74715]: [cephadm INFO cephadm.serve] Deploying daemon crash.compute-1 on compute-1
Dec 11 09:14:00 compute-0 ceph-mgr[74715]: log_channel(cephadm) log [INF] : Deploying daemon crash.compute-1 on compute-1
Dec 11 09:14:01 compute-0 ceph-mon[74426]: Updating compute-1:/etc/ceph/ceph.client.admin.keyring
Dec 11 09:14:01 compute-0 ceph-mon[74426]: Updating compute-1:/var/lib/ceph/31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060/config/ceph.client.admin.keyring
Dec 11 09:14:01 compute-0 ceph-mon[74426]: from='mgr.14124 192.168.122.100:0/1266408288' entity='mgr.compute-0.wwpcae' 
Dec 11 09:14:01 compute-0 ceph-mon[74426]: from='mgr.14124 192.168.122.100:0/1266408288' entity='mgr.compute-0.wwpcae' 
Dec 11 09:14:01 compute-0 ceph-mon[74426]: from='mgr.14124 192.168.122.100:0/1266408288' entity='mgr.compute-0.wwpcae' 
Dec 11 09:14:01 compute-0 ceph-mon[74426]: from='mgr.14124 192.168.122.100:0/1266408288' entity='mgr.compute-0.wwpcae' cmd=[{"prefix": "auth get-or-create", "entity": "client.crash.compute-1", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]: dispatch
Dec 11 09:14:01 compute-0 ceph-mon[74426]: from='mgr.14124 192.168.122.100:0/1266408288' entity='mgr.compute-0.wwpcae' cmd='[{"prefix": "auth get-or-create", "entity": "client.crash.compute-1", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]': finished
Dec 11 09:14:01 compute-0 ceph-mon[74426]: from='mgr.14124 192.168.122.100:0/1266408288' entity='mgr.compute-0.wwpcae' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 11 09:14:01 compute-0 ceph-mon[74426]: log_channel(cluster) log [WRN] : Health check failed: Failed to apply 2 service(s): mon,mgr (CEPHADM_APPLY_SPEC_FAIL)
Dec 11 09:14:02 compute-0 ceph-mon[74426]: Failed to apply mon spec MONSpec.from_json(yaml.safe_load('''service_type: mon
                                           service_name: mon
                                           placement:
                                             hosts:
                                             - compute-0
                                             - compute-1
                                             - compute-2
                                           ''')): Cannot place <MONSpec for service_name=mon> on compute-2: Unknown hosts
Dec 11 09:14:02 compute-0 ceph-mon[74426]: Failed to apply mgr spec ServiceSpec.from_json(yaml.safe_load('''service_type: mgr
                                           service_name: mgr
                                           placement:
                                             hosts:
                                             - compute-0
                                             - compute-1
                                             - compute-2
                                           ''')): Cannot place <ServiceSpec for service_name=mgr> on compute-2: Unknown hosts
Dec 11 09:14:02 compute-0 ceph-mon[74426]: pgmap v29: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec 11 09:14:02 compute-0 ceph-mon[74426]: pgmap v30: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec 11 09:14:02 compute-0 ceph-mon[74426]: Deploying daemon crash.compute-1 on compute-1
Dec 11 09:14:02 compute-0 ceph-mon[74426]: Health check failed: Failed to apply 2 service(s): mon,mgr (CEPHADM_APPLY_SPEC_FAIL)
Dec 11 09:14:02 compute-0 ceph-mgr[74715]: log_channel(cluster) log [DBG] : pgmap v31: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec 11 09:14:03 compute-0 sudo[80929]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qpnylnzhtcxrdrjgnemfyorrxrqooohv ; /usr/bin/python3'
Dec 11 09:14:03 compute-0 sudo[80929]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 11 09:14:03 compute-0 ceph-mon[74426]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Dec 11 09:14:03 compute-0 ceph-mon[74426]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/1266408288' entity='mgr.compute-0.wwpcae' 
Dec 11 09:14:03 compute-0 ceph-mon[74426]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Dec 11 09:14:03 compute-0 ceph-mon[74426]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/1266408288' entity='mgr.compute-0.wwpcae' 
Dec 11 09:14:03 compute-0 ceph-mon[74426]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.crash}] v 0)
Dec 11 09:14:03 compute-0 ceph-mon[74426]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/1266408288' entity='mgr.compute-0.wwpcae' 
Dec 11 09:14:03 compute-0 ceph-mgr[74715]: [progress INFO root] complete: finished ev 0ed8d95b-98e3-4985-8f0a-734148cfc7f1 (Updating crash deployment (+1 -> 2))
Dec 11 09:14:03 compute-0 ceph-mgr[74715]: [progress INFO root] Completed event 0ed8d95b-98e3-4985-8f0a-734148cfc7f1 (Updating crash deployment (+1 -> 2)) in 3 seconds
Dec 11 09:14:03 compute-0 ceph-mon[74426]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.crash}] v 0)
Dec 11 09:14:03 compute-0 ceph-mon[74426]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/1266408288' entity='mgr.compute-0.wwpcae' 
Dec 11 09:14:03 compute-0 ceph-mon[74426]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Dec 11 09:14:03 compute-0 ceph-mon[74426]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/1266408288' entity='mgr.compute-0.wwpcae' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec 11 09:14:03 compute-0 ceph-mon[74426]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Dec 11 09:14:03 compute-0 ceph-mon[74426]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/1266408288' entity='mgr.compute-0.wwpcae' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec 11 09:14:03 compute-0 ceph-mon[74426]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 11 09:14:03 compute-0 ceph-mon[74426]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/1266408288' entity='mgr.compute-0.wwpcae' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 11 09:14:03 compute-0 ceph-mon[74426]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Dec 11 09:14:03 compute-0 ceph-mon[74426]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/1266408288' entity='mgr.compute-0.wwpcae' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec 11 09:14:03 compute-0 ceph-mon[74426]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 11 09:14:03 compute-0 ceph-mon[74426]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/1266408288' entity='mgr.compute-0.wwpcae' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 11 09:14:03 compute-0 python3[80931]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   status --format json | jq .osdmap.num_up_osds _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 11 09:14:03 compute-0 sudo[80932]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 11 09:14:03 compute-0 sudo[80932]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 11 09:14:03 compute-0 sudo[80932]: pam_unix(sudo:session): session closed for user root
Dec 11 09:14:03 compute-0 podman[80939]: 2025-12-11 09:14:03.473398186 +0000 UTC m=+0.061607988 container create 4b3e91cb616bbbc244c589926b9654af67cadb4190fe81bbadd014ca1698bbe2 (image=quay.io/ceph/ceph:v19, name=thirsty_bohr, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 11 09:14:03 compute-0 systemd[1]: Started libpod-conmon-4b3e91cb616bbbc244c589926b9654af67cadb4190fe81bbadd014ca1698bbe2.scope.
Dec 11 09:14:03 compute-0 sudo[80969]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Dec 11 09:14:03 compute-0 sudo[80969]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 11 09:14:03 compute-0 podman[80939]: 2025-12-11 09:14:03.456263117 +0000 UTC m=+0.044472949 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Dec 11 09:14:03 compute-0 systemd[1]: Started libcrun container.
Dec 11 09:14:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8efb4298fdb6839f6bcdad6247ae68ffaf6bc836a0a76efa4eeed63a774e4d16/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 11 09:14:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8efb4298fdb6839f6bcdad6247ae68ffaf6bc836a0a76efa4eeed63a774e4d16/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Dec 11 09:14:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8efb4298fdb6839f6bcdad6247ae68ffaf6bc836a0a76efa4eeed63a774e4d16/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec 11 09:14:03 compute-0 podman[80939]: 2025-12-11 09:14:03.568406994 +0000 UTC m=+0.156616826 container init 4b3e91cb616bbbc244c589926b9654af67cadb4190fe81bbadd014ca1698bbe2 (image=quay.io/ceph/ceph:v19, name=thirsty_bohr, io.buildah.version=1.40.1, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 11 09:14:03 compute-0 podman[80939]: 2025-12-11 09:14:03.574632729 +0000 UTC m=+0.162842541 container start 4b3e91cb616bbbc244c589926b9654af67cadb4190fe81bbadd014ca1698bbe2 (image=quay.io/ceph/ceph:v19, name=thirsty_bohr, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Dec 11 09:14:03 compute-0 podman[80939]: 2025-12-11 09:14:03.578677897 +0000 UTC m=+0.166887709 container attach 4b3e91cb616bbbc244c589926b9654af67cadb4190fe81bbadd014ca1698bbe2 (image=quay.io/ceph/ceph:v19, name=thirsty_bohr, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 11 09:14:03 compute-0 ceph-mon[74426]: mon.compute-0@0(leader).osd e3 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 11 09:14:03 compute-0 podman[81063]: 2025-12-11 09:14:03.919564197 +0000 UTC m=+0.043476758 container create f4d749d78ded722c870bd498263ff862c6d321161ac6b217b3f1c59a40c4bee7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=keen_snyder, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Dec 11 09:14:03 compute-0 systemd[1]: Started libpod-conmon-f4d749d78ded722c870bd498263ff862c6d321161ac6b217b3f1c59a40c4bee7.scope.
Dec 11 09:14:03 compute-0 systemd[1]: Started libcrun container.
Dec 11 09:14:03 compute-0 podman[81063]: 2025-12-11 09:14:03.977565642 +0000 UTC m=+0.101478233 container init f4d749d78ded722c870bd498263ff862c6d321161ac6b217b3f1c59a40c4bee7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=keen_snyder, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 11 09:14:03 compute-0 podman[81063]: 2025-12-11 09:14:03.984346204 +0000 UTC m=+0.108258765 container start f4d749d78ded722c870bd498263ff862c6d321161ac6b217b3f1c59a40c4bee7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=keen_snyder, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Dec 11 09:14:03 compute-0 keen_snyder[81079]: 167 167
Dec 11 09:14:03 compute-0 podman[81063]: 2025-12-11 09:14:03.988477295 +0000 UTC m=+0.112389866 container attach f4d749d78ded722c870bd498263ff862c6d321161ac6b217b3f1c59a40c4bee7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=keen_snyder, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.40.1, org.label-schema.build-date=20250325)
Dec 11 09:14:03 compute-0 systemd[1]: libpod-f4d749d78ded722c870bd498263ff862c6d321161ac6b217b3f1c59a40c4bee7.scope: Deactivated successfully.
Dec 11 09:14:03 compute-0 podman[81063]: 2025-12-11 09:14:03.990644963 +0000 UTC m=+0.114557524 container died f4d749d78ded722c870bd498263ff862c6d321161ac6b217b3f1c59a40c4bee7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=keen_snyder, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec 11 09:14:03 compute-0 podman[81063]: 2025-12-11 09:14:03.901664864 +0000 UTC m=+0.025577455 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec 11 09:14:04 compute-0 systemd[1]: var-lib-containers-storage-overlay-407136daf5dd5f0aaeb603833e37500141c73170ec2048906bfb09d7af4782ec-merged.mount: Deactivated successfully.
Dec 11 09:14:04 compute-0 podman[81063]: 2025-12-11 09:14:04.031483786 +0000 UTC m=+0.155396347 container remove f4d749d78ded722c870bd498263ff862c6d321161ac6b217b3f1c59a40c4bee7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=keen_snyder, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.build-date=20250325, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 11 09:14:04 compute-0 ceph-mon[74426]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json"} v 0)
Dec 11 09:14:04 compute-0 ceph-mon[74426]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2337319113' entity='client.admin' cmd=[{"prefix": "status", "format": "json"}]: dispatch
Dec 11 09:14:04 compute-0 thirsty_bohr[80998]: 
Dec 11 09:14:04 compute-0 thirsty_bohr[80998]: {"fsid":"31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060","health":{"status":"HEALTH_WARN","checks":{"CEPHADM_APPLY_SPEC_FAIL":{"severity":"HEALTH_WARN","summary":{"message":"Failed to apply 2 service(s): mon,mgr","count":2},"muted":false},"TOO_FEW_OSDS":{"severity":"HEALTH_WARN","summary":{"message":"OSD count 0 < osd_pool_default_size 1","count":1},"muted":false}},"mutes":[]},"election_epoch":5,"quorum":[0],"quorum_names":["compute-0"],"quorum_age":95,"monmap":{"epoch":1,"min_mon_release_name":"squid","num_mons":1},"osdmap":{"epoch":3,"num_osds":0,"num_up_osds":0,"osd_up_since":0,"num_in_osds":0,"osd_in_since":0,"num_remapped_pgs":0},"pgmap":{"pgs_by_state":[],"num_pgs":0,"num_pools":0,"num_objects":0,"data_bytes":0,"bytes_used":0,"bytes_avail":0,"bytes_total":0},"fsmap":{"epoch":1,"btime":"2025-12-11T09:12:26:191373+0000","by_rank":[],"up:standby":0},"mgrmap":{"available":true,"num_standbys":0,"modules":["cephadm","iostat","nfs","restful"],"services":{}},"servicemap":{"epoch":2,"modified":"2025-12-11T09:13:53.054285+0000","services":{}},"progress_events":{"0ed8d95b-98e3-4985-8f0a-734148cfc7f1":{"message":"Updating crash deployment (+1 -> 2) (0s)\n      [............................] ","progress":0,"add_to_ceph_s":true}}}
Dec 11 09:14:04 compute-0 systemd[1]: libpod-conmon-f4d749d78ded722c870bd498263ff862c6d321161ac6b217b3f1c59a40c4bee7.scope: Deactivated successfully.
Dec 11 09:14:04 compute-0 systemd[1]: libpod-4b3e91cb616bbbc244c589926b9654af67cadb4190fe81bbadd014ca1698bbe2.scope: Deactivated successfully.
Dec 11 09:14:04 compute-0 podman[80939]: 2025-12-11 09:14:04.058015741 +0000 UTC m=+0.646225553 container died 4b3e91cb616bbbc244c589926b9654af67cadb4190fe81bbadd014ca1698bbe2 (image=quay.io/ceph/ceph:v19, name=thirsty_bohr, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Dec 11 09:14:04 compute-0 systemd[1]: var-lib-containers-storage-overlay-8efb4298fdb6839f6bcdad6247ae68ffaf6bc836a0a76efa4eeed63a774e4d16-merged.mount: Deactivated successfully.
Dec 11 09:14:04 compute-0 podman[80939]: 2025-12-11 09:14:04.09042261 +0000 UTC m=+0.678632422 container remove 4b3e91cb616bbbc244c589926b9654af67cadb4190fe81bbadd014ca1698bbe2 (image=quay.io/ceph/ceph:v19, name=thirsty_bohr, org.label-schema.build-date=20250325, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Dec 11 09:14:04 compute-0 systemd[1]: libpod-conmon-4b3e91cb616bbbc244c589926b9654af67cadb4190fe81bbadd014ca1698bbe2.scope: Deactivated successfully.
Dec 11 09:14:04 compute-0 sudo[80929]: pam_unix(sudo:session): session closed for user root
Dec 11 09:14:04 compute-0 podman[81118]: 2025-12-11 09:14:04.198378726 +0000 UTC m=+0.046947349 container create 294d61789fb1cffaab484bb6576629f075512b7c0ccf3cb5344e15f180853fa0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=boring_galileo, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec 11 09:14:04 compute-0 systemd[1]: Started libpod-conmon-294d61789fb1cffaab484bb6576629f075512b7c0ccf3cb5344e15f180853fa0.scope.
Dec 11 09:14:04 compute-0 systemd[1]: Started libcrun container.
Dec 11 09:14:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/25b44c0f700d590da96549bf756ddead56bd402ac6dd18d8d40fba91219d05a2/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 11 09:14:04 compute-0 podman[81118]: 2025-12-11 09:14:04.178687326 +0000 UTC m=+0.027255969 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec 11 09:14:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/25b44c0f700d590da96549bf756ddead56bd402ac6dd18d8d40fba91219d05a2/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 11 09:14:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/25b44c0f700d590da96549bf756ddead56bd402ac6dd18d8d40fba91219d05a2/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 11 09:14:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/25b44c0f700d590da96549bf756ddead56bd402ac6dd18d8d40fba91219d05a2/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 11 09:14:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/25b44c0f700d590da96549bf756ddead56bd402ac6dd18d8d40fba91219d05a2/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec 11 09:14:04 compute-0 podman[81118]: 2025-12-11 09:14:04.287854619 +0000 UTC m=+0.136423262 container init 294d61789fb1cffaab484bb6576629f075512b7c0ccf3cb5344e15f180853fa0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=boring_galileo, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 11 09:14:04 compute-0 podman[81118]: 2025-12-11 09:14:04.29521021 +0000 UTC m=+0.143778843 container start 294d61789fb1cffaab484bb6576629f075512b7c0ccf3cb5344e15f180853fa0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=boring_galileo, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 11 09:14:04 compute-0 podman[81118]: 2025-12-11 09:14:04.299425393 +0000 UTC m=+0.147994016 container attach 294d61789fb1cffaab484bb6576629f075512b7c0ccf3cb5344e15f180853fa0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=boring_galileo, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 11 09:14:04 compute-0 ceph-mon[74426]: pgmap v31: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec 11 09:14:04 compute-0 ceph-mon[74426]: from='mgr.14124 192.168.122.100:0/1266408288' entity='mgr.compute-0.wwpcae' 
Dec 11 09:14:04 compute-0 ceph-mon[74426]: from='mgr.14124 192.168.122.100:0/1266408288' entity='mgr.compute-0.wwpcae' 
Dec 11 09:14:04 compute-0 ceph-mon[74426]: from='mgr.14124 192.168.122.100:0/1266408288' entity='mgr.compute-0.wwpcae' 
Dec 11 09:14:04 compute-0 ceph-mon[74426]: from='mgr.14124 192.168.122.100:0/1266408288' entity='mgr.compute-0.wwpcae' 
Dec 11 09:14:04 compute-0 ceph-mon[74426]: from='mgr.14124 192.168.122.100:0/1266408288' entity='mgr.compute-0.wwpcae' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec 11 09:14:04 compute-0 ceph-mon[74426]: from='mgr.14124 192.168.122.100:0/1266408288' entity='mgr.compute-0.wwpcae' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec 11 09:14:04 compute-0 ceph-mon[74426]: from='mgr.14124 192.168.122.100:0/1266408288' entity='mgr.compute-0.wwpcae' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 11 09:14:04 compute-0 ceph-mon[74426]: from='mgr.14124 192.168.122.100:0/1266408288' entity='mgr.compute-0.wwpcae' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec 11 09:14:04 compute-0 ceph-mon[74426]: from='mgr.14124 192.168.122.100:0/1266408288' entity='mgr.compute-0.wwpcae' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 11 09:14:04 compute-0 ceph-mon[74426]: from='client.? 192.168.122.100:0/2337319113' entity='client.admin' cmd=[{"prefix": "status", "format": "json"}]: dispatch
Dec 11 09:14:04 compute-0 ceph-mgr[74715]: log_channel(cluster) log [DBG] : pgmap v32: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec 11 09:14:04 compute-0 boring_galileo[81134]: --> passed data devices: 0 physical, 1 LVM
Dec 11 09:14:04 compute-0 boring_galileo[81134]: Running command: /usr/bin/ceph-authtool --gen-print-key
Dec 11 09:14:04 compute-0 boring_galileo[81134]: Running command: /usr/bin/ceph-authtool --gen-print-key
Dec 11 09:14:04 compute-0 boring_galileo[81134]: Running command: /usr/bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring -i - osd new 865a308b-cdc3-4034-b5eb-feb596b462bf
Dec 11 09:14:05 compute-0 ceph-mon[74426]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd new", "uuid": "a627ae9c-42f5-4a06-85ec-51173588750e"} v 0)
Dec 11 09:14:05 compute-0 ceph-mon[74426]: log_channel(audit) log [INF] : from='client.? 192.168.122.101:0/2836891570' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "a627ae9c-42f5-4a06-85ec-51173588750e"}]: dispatch
Dec 11 09:14:05 compute-0 ceph-mon[74426]: mon.compute-0@0(leader).osd e3 do_prune osdmap full prune enabled
Dec 11 09:14:05 compute-0 ceph-mon[74426]: mon.compute-0@0(leader).osd e3 encode_pending skipping prime_pg_temp; mapping job did not start
Dec 11 09:14:05 compute-0 ceph-mon[74426]: log_channel(audit) log [INF] : from='client.? 192.168.122.101:0/2836891570' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "a627ae9c-42f5-4a06-85ec-51173588750e"}]': finished
Dec 11 09:14:05 compute-0 ceph-mon[74426]: mon.compute-0@0(leader).osd e4 e4: 1 total, 0 up, 1 in
Dec 11 09:14:05 compute-0 ceph-mon[74426]: log_channel(cluster) log [DBG] : osdmap e4: 1 total, 0 up, 1 in
Dec 11 09:14:05 compute-0 ceph-mon[74426]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0)
Dec 11 09:14:05 compute-0 ceph-mon[74426]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/1266408288' entity='mgr.compute-0.wwpcae' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Dec 11 09:14:05 compute-0 ceph-mgr[74715]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Dec 11 09:14:05 compute-0 ceph-mon[74426]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd new", "uuid": "865a308b-cdc3-4034-b5eb-feb596b462bf"} v 0)
Dec 11 09:14:05 compute-0 ceph-mon[74426]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3584414900' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "865a308b-cdc3-4034-b5eb-feb596b462bf"}]: dispatch
Dec 11 09:14:05 compute-0 ceph-mon[74426]: mon.compute-0@0(leader).osd e4 do_prune osdmap full prune enabled
Dec 11 09:14:05 compute-0 ceph-mon[74426]: mon.compute-0@0(leader).osd e4 encode_pending skipping prime_pg_temp; mapping job did not start
Dec 11 09:14:05 compute-0 ceph-mon[74426]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3584414900' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "865a308b-cdc3-4034-b5eb-feb596b462bf"}]': finished
Dec 11 09:14:05 compute-0 ceph-mon[74426]: mon.compute-0@0(leader).osd e5 e5: 2 total, 0 up, 2 in
Dec 11 09:14:05 compute-0 ceph-mon[74426]: log_channel(cluster) log [DBG] : osdmap e5: 2 total, 0 up, 2 in
Dec 11 09:14:05 compute-0 ceph-mon[74426]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0)
Dec 11 09:14:05 compute-0 ceph-mon[74426]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/1266408288' entity='mgr.compute-0.wwpcae' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Dec 11 09:14:05 compute-0 ceph-mon[74426]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0)
Dec 11 09:14:05 compute-0 ceph-mon[74426]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/1266408288' entity='mgr.compute-0.wwpcae' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Dec 11 09:14:05 compute-0 ceph-mgr[74715]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Dec 11 09:14:05 compute-0 ceph-mgr[74715]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Dec 11 09:14:05 compute-0 ceph-mon[74426]: from='client.? 192.168.122.101:0/2836891570' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "a627ae9c-42f5-4a06-85ec-51173588750e"}]: dispatch
Dec 11 09:14:05 compute-0 ceph-mon[74426]: from='client.? 192.168.122.101:0/2836891570' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "a627ae9c-42f5-4a06-85ec-51173588750e"}]': finished
Dec 11 09:14:05 compute-0 ceph-mon[74426]: osdmap e4: 1 total, 0 up, 1 in
Dec 11 09:14:05 compute-0 ceph-mon[74426]: from='mgr.14124 192.168.122.100:0/1266408288' entity='mgr.compute-0.wwpcae' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Dec 11 09:14:05 compute-0 ceph-mon[74426]: from='client.? 192.168.122.100:0/3584414900' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "865a308b-cdc3-4034-b5eb-feb596b462bf"}]: dispatch
Dec 11 09:14:05 compute-0 ceph-mon[74426]: from='client.? 192.168.122.100:0/3584414900' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "865a308b-cdc3-4034-b5eb-feb596b462bf"}]': finished
Dec 11 09:14:05 compute-0 ceph-mon[74426]: osdmap e5: 2 total, 0 up, 2 in
Dec 11 09:14:05 compute-0 ceph-mon[74426]: from='mgr.14124 192.168.122.100:0/1266408288' entity='mgr.compute-0.wwpcae' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Dec 11 09:14:05 compute-0 ceph-mon[74426]: from='mgr.14124 192.168.122.100:0/1266408288' entity='mgr.compute-0.wwpcae' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Dec 11 09:14:05 compute-0 boring_galileo[81134]: Running command: /usr/bin/mount -t tmpfs tmpfs /var/lib/ceph/osd/ceph-1
Dec 11 09:14:05 compute-0 boring_galileo[81134]: Running command: /usr/bin/chown -h ceph:ceph /dev/ceph_vg0/ceph_lv0
Dec 11 09:14:05 compute-0 lvm[81196]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Dec 11 09:14:05 compute-0 lvm[81196]: VG ceph_vg0 finished
Dec 11 09:14:05 compute-0 boring_galileo[81134]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-0
Dec 11 09:14:05 compute-0 boring_galileo[81134]: Running command: /usr/bin/ln -s /dev/ceph_vg0/ceph_lv0 /var/lib/ceph/osd/ceph-1/block
Dec 11 09:14:05 compute-0 boring_galileo[81134]: Running command: /usr/bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring mon getmap -o /var/lib/ceph/osd/ceph-1/activate.monmap
Dec 11 09:14:05 compute-0 ceph-mon[74426]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon getmap"} v 0)
Dec 11 09:14:05 compute-0 ceph-mon[74426]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3384999788' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch
Dec 11 09:14:05 compute-0 ceph-mon[74426]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon getmap"} v 0)
Dec 11 09:14:05 compute-0 ceph-mon[74426]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/752385124' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch
Dec 11 09:14:05 compute-0 boring_galileo[81134]:  stderr: got monmap epoch 1
Dec 11 09:14:05 compute-0 boring_galileo[81134]: --> Creating keyring file for osd.1
Dec 11 09:14:05 compute-0 boring_galileo[81134]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-1/keyring
Dec 11 09:14:05 compute-0 boring_galileo[81134]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-1/
Dec 11 09:14:05 compute-0 boring_galileo[81134]: Running command: /usr/bin/ceph-osd --cluster ceph --osd-objectstore bluestore --mkfs -i 1 --monmap /var/lib/ceph/osd/ceph-1/activate.monmap --keyfile - --osdspec-affinity default_drive_group --osd-data /var/lib/ceph/osd/ceph-1/ --osd-uuid 865a308b-cdc3-4034-b5eb-feb596b462bf --setuser ceph --setgroup ceph
Dec 11 09:14:06 compute-0 ceph-mgr[74715]: [progress INFO root] Writing back 2 completed events
Dec 11 09:14:06 compute-0 ceph-mon[74426]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0)
Dec 11 09:14:06 compute-0 ceph-mon[74426]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/1266408288' entity='mgr.compute-0.wwpcae' 
Dec 11 09:14:06 compute-0 ceph-mon[74426]: pgmap v32: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec 11 09:14:06 compute-0 ceph-mon[74426]: from='client.? 192.168.122.101:0/3384999788' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch
Dec 11 09:14:06 compute-0 ceph-mon[74426]: from='client.? 192.168.122.100:0/752385124' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch
Dec 11 09:14:06 compute-0 ceph-mon[74426]: from='mgr.14124 192.168.122.100:0/1266408288' entity='mgr.compute-0.wwpcae' 
Dec 11 09:14:06 compute-0 ceph-mgr[74715]: log_channel(cluster) log [DBG] : pgmap v35: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec 11 09:14:07 compute-0 ceph-mon[74426]: log_channel(cluster) log [INF] : Health check cleared: TOO_FEW_OSDS (was: OSD count 0 < osd_pool_default_size 1)
Dec 11 09:14:07 compute-0 ceph-mon[74426]: Health check cleared: TOO_FEW_OSDS (was: OSD count 0 < osd_pool_default_size 1)
Dec 11 09:14:08 compute-0 ceph-mon[74426]: pgmap v35: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec 11 09:14:08 compute-0 ceph-mgr[74715]: log_channel(cluster) log [DBG] : pgmap v36: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec 11 09:14:08 compute-0 ceph-mon[74426]: mon.compute-0@0(leader).osd e5 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 11 09:14:09 compute-0 boring_galileo[81134]:  stderr: 2025-12-11T09:14:05.947+0000 7f3d301d4740 -1 bluestore(/var/lib/ceph/osd/ceph-1//block) No valid bdev label found
Dec 11 09:14:09 compute-0 boring_galileo[81134]:  stderr: 2025-12-11T09:14:06.213+0000 7f3d301d4740 -1 bluestore(/var/lib/ceph/osd/ceph-1/) _read_fsid unparsable uuid
Dec 11 09:14:09 compute-0 boring_galileo[81134]: --> ceph-volume lvm prepare successful for: ceph_vg0/ceph_lv0
Dec 11 09:14:09 compute-0 boring_galileo[81134]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-1
Dec 11 09:14:09 compute-0 boring_galileo[81134]: Running command: /usr/bin/ceph-bluestore-tool --cluster=ceph prime-osd-dir --dev /dev/ceph_vg0/ceph_lv0 --path /var/lib/ceph/osd/ceph-1 --no-mon-config
Dec 11 09:14:09 compute-0 boring_galileo[81134]: Running command: /usr/bin/ln -snf /dev/ceph_vg0/ceph_lv0 /var/lib/ceph/osd/ceph-1/block
Dec 11 09:14:09 compute-0 boring_galileo[81134]: Running command: /usr/bin/chown -h ceph:ceph /var/lib/ceph/osd/ceph-1/block
Dec 11 09:14:09 compute-0 boring_galileo[81134]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-0
Dec 11 09:14:09 compute-0 boring_galileo[81134]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-1
Dec 11 09:14:09 compute-0 boring_galileo[81134]: --> ceph-volume lvm activate successful for osd ID: 1
Dec 11 09:14:09 compute-0 boring_galileo[81134]: --> ceph-volume lvm create successful for: ceph_vg0/ceph_lv0
Dec 11 09:14:09 compute-0 systemd[1]: libpod-294d61789fb1cffaab484bb6576629f075512b7c0ccf3cb5344e15f180853fa0.scope: Deactivated successfully.
Dec 11 09:14:09 compute-0 systemd[1]: libpod-294d61789fb1cffaab484bb6576629f075512b7c0ccf3cb5344e15f180853fa0.scope: Consumed 2.596s CPU time.
Dec 11 09:14:09 compute-0 podman[82101]: 2025-12-11 09:14:09.540471437 +0000 UTC m=+0.028255820 container died 294d61789fb1cffaab484bb6576629f075512b7c0ccf3cb5344e15f180853fa0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=boring_galileo, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 11 09:14:09 compute-0 systemd[1]: var-lib-containers-storage-overlay-25b44c0f700d590da96549bf756ddead56bd402ac6dd18d8d40fba91219d05a2-merged.mount: Deactivated successfully.
Dec 11 09:14:09 compute-0 podman[82101]: 2025-12-11 09:14:09.58384582 +0000 UTC m=+0.071630173 container remove 294d61789fb1cffaab484bb6576629f075512b7c0ccf3cb5344e15f180853fa0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=boring_galileo, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Dec 11 09:14:09 compute-0 systemd[1]: libpod-conmon-294d61789fb1cffaab484bb6576629f075512b7c0ccf3cb5344e15f180853fa0.scope: Deactivated successfully.
Dec 11 09:14:09 compute-0 sudo[80969]: pam_unix(sudo:session): session closed for user root
Dec 11 09:14:09 compute-0 sudo[82116]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 11 09:14:09 compute-0 sudo[82116]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 11 09:14:09 compute-0 sudo[82116]: pam_unix(sudo:session): session closed for user root
Dec 11 09:14:09 compute-0 sudo[82141]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060 -- lvm list --format json
Dec 11 09:14:09 compute-0 sudo[82141]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 11 09:14:10 compute-0 podman[82205]: 2025-12-11 09:14:10.173283256 +0000 UTC m=+0.046191783 container create bc6b0cf4dfa0d0797e88bffeaea1a005ebef55d24869287289efc20c45e600ff (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=xenodochial_lederberg, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, OSD_FLAVOR=default)
Dec 11 09:14:10 compute-0 systemd[1]: Started libpod-conmon-bc6b0cf4dfa0d0797e88bffeaea1a005ebef55d24869287289efc20c45e600ff.scope.
Dec 11 09:14:10 compute-0 systemd[1]: Started libcrun container.
Dec 11 09:14:10 compute-0 podman[82205]: 2025-12-11 09:14:10.243620688 +0000 UTC m=+0.116529245 container init bc6b0cf4dfa0d0797e88bffeaea1a005ebef55d24869287289efc20c45e600ff (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=xenodochial_lederberg, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec 11 09:14:10 compute-0 podman[82205]: 2025-12-11 09:14:10.155682344 +0000 UTC m=+0.028590901 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec 11 09:14:10 compute-0 podman[82205]: 2025-12-11 09:14:10.253987145 +0000 UTC m=+0.126895682 container start bc6b0cf4dfa0d0797e88bffeaea1a005ebef55d24869287289efc20c45e600ff (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=xenodochial_lederberg, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, OSD_FLAVOR=default)
Dec 11 09:14:10 compute-0 podman[82205]: 2025-12-11 09:14:10.257133384 +0000 UTC m=+0.130041941 container attach bc6b0cf4dfa0d0797e88bffeaea1a005ebef55d24869287289efc20c45e600ff (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=xenodochial_lederberg, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Dec 11 09:14:10 compute-0 xenodochial_lederberg[82221]: 167 167
Dec 11 09:14:10 compute-0 systemd[1]: libpod-bc6b0cf4dfa0d0797e88bffeaea1a005ebef55d24869287289efc20c45e600ff.scope: Deactivated successfully.
Dec 11 09:14:10 compute-0 podman[82205]: 2025-12-11 09:14:10.259059794 +0000 UTC m=+0.131968331 container died bc6b0cf4dfa0d0797e88bffeaea1a005ebef55d24869287289efc20c45e600ff (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=xenodochial_lederberg, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Dec 11 09:14:10 compute-0 systemd[1]: var-lib-containers-storage-overlay-71bd101a70b3ccf021540528a666b94aec1704e382cba9cdead2acaa7b322045-merged.mount: Deactivated successfully.
Dec 11 09:14:10 compute-0 podman[82205]: 2025-12-11 09:14:10.295377916 +0000 UTC m=+0.168286453 container remove bc6b0cf4dfa0d0797e88bffeaea1a005ebef55d24869287289efc20c45e600ff (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=xenodochial_lederberg, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 11 09:14:10 compute-0 systemd[1]: libpod-conmon-bc6b0cf4dfa0d0797e88bffeaea1a005ebef55d24869287289efc20c45e600ff.scope: Deactivated successfully.
Dec 11 09:14:10 compute-0 podman[82244]: 2025-12-11 09:14:10.455058278 +0000 UTC m=+0.047834115 container create fb7b959cbd898b948ec2ac3b3f6f723cc0d8159e1a9c7d927e9d2d911d5b46ad (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=friendly_hermann, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1)
Dec 11 09:14:10 compute-0 ceph-mon[74426]: pgmap v36: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec 11 09:14:10 compute-0 systemd[1]: Started libpod-conmon-fb7b959cbd898b948ec2ac3b3f6f723cc0d8159e1a9c7d927e9d2d911d5b46ad.scope.
Dec 11 09:14:10 compute-0 systemd[1]: Started libcrun container.
Dec 11 09:14:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/95a234980a114e566af8017bff2b1a391897204ef4744f8db2f9ee5783386b64/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 11 09:14:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/95a234980a114e566af8017bff2b1a391897204ef4744f8db2f9ee5783386b64/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 11 09:14:10 compute-0 podman[82244]: 2025-12-11 09:14:10.435203753 +0000 UTC m=+0.027979620 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec 11 09:14:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/95a234980a114e566af8017bff2b1a391897204ef4744f8db2f9ee5783386b64/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 11 09:14:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/95a234980a114e566af8017bff2b1a391897204ef4744f8db2f9ee5783386b64/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 11 09:14:10 compute-0 podman[82244]: 2025-12-11 09:14:10.543024124 +0000 UTC m=+0.135799971 container init fb7b959cbd898b948ec2ac3b3f6f723cc0d8159e1a9c7d927e9d2d911d5b46ad (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=friendly_hermann, OSD_FLAVOR=default, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True)
Dec 11 09:14:10 compute-0 podman[82244]: 2025-12-11 09:14:10.55050879 +0000 UTC m=+0.143284627 container start fb7b959cbd898b948ec2ac3b3f6f723cc0d8159e1a9c7d927e9d2d911d5b46ad (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=friendly_hermann, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 11 09:14:10 compute-0 podman[82244]: 2025-12-11 09:14:10.554205895 +0000 UTC m=+0.146981762 container attach fb7b959cbd898b948ec2ac3b3f6f723cc0d8159e1a9c7d927e9d2d911d5b46ad (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=friendly_hermann, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Dec 11 09:14:10 compute-0 ceph-mgr[74715]: log_channel(cluster) log [DBG] : pgmap v37: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec 11 09:14:10 compute-0 friendly_hermann[82260]: {
Dec 11 09:14:10 compute-0 friendly_hermann[82260]:     "1": [
Dec 11 09:14:10 compute-0 friendly_hermann[82260]:         {
Dec 11 09:14:10 compute-0 friendly_hermann[82260]:             "devices": [
Dec 11 09:14:10 compute-0 friendly_hermann[82260]:                 "/dev/loop3"
Dec 11 09:14:10 compute-0 friendly_hermann[82260]:             ],
Dec 11 09:14:10 compute-0 friendly_hermann[82260]:             "lv_name": "ceph_lv0",
Dec 11 09:14:10 compute-0 friendly_hermann[82260]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec 11 09:14:10 compute-0 friendly_hermann[82260]:             "lv_size": "21470642176",
Dec 11 09:14:10 compute-0 friendly_hermann[82260]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=TIjPv6-JN4c-U1Hf-F3k1-2mgj-CwGi-pbjIHh,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=865a308b-cdc3-4034-b5eb-feb596b462bf,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 11 09:14:10 compute-0 friendly_hermann[82260]:             "lv_uuid": "TIjPv6-JN4c-U1Hf-F3k1-2mgj-CwGi-pbjIHh",
Dec 11 09:14:10 compute-0 friendly_hermann[82260]:             "name": "ceph_lv0",
Dec 11 09:14:10 compute-0 friendly_hermann[82260]:             "path": "/dev/ceph_vg0/ceph_lv0",
Dec 11 09:14:10 compute-0 friendly_hermann[82260]:             "tags": {
Dec 11 09:14:10 compute-0 friendly_hermann[82260]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec 11 09:14:10 compute-0 friendly_hermann[82260]:                 "ceph.block_uuid": "TIjPv6-JN4c-U1Hf-F3k1-2mgj-CwGi-pbjIHh",
Dec 11 09:14:10 compute-0 friendly_hermann[82260]:                 "ceph.cephx_lockbox_secret": "",
Dec 11 09:14:10 compute-0 friendly_hermann[82260]:                 "ceph.cluster_fsid": "31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060",
Dec 11 09:14:10 compute-0 friendly_hermann[82260]:                 "ceph.cluster_name": "ceph",
Dec 11 09:14:10 compute-0 friendly_hermann[82260]:                 "ceph.crush_device_class": "",
Dec 11 09:14:10 compute-0 friendly_hermann[82260]:                 "ceph.encrypted": "0",
Dec 11 09:14:10 compute-0 friendly_hermann[82260]:                 "ceph.osd_fsid": "865a308b-cdc3-4034-b5eb-feb596b462bf",
Dec 11 09:14:10 compute-0 friendly_hermann[82260]:                 "ceph.osd_id": "1",
Dec 11 09:14:10 compute-0 friendly_hermann[82260]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 11 09:14:10 compute-0 friendly_hermann[82260]:                 "ceph.type": "block",
Dec 11 09:14:10 compute-0 friendly_hermann[82260]:                 "ceph.vdo": "0",
Dec 11 09:14:10 compute-0 friendly_hermann[82260]:                 "ceph.with_tpm": "0"
Dec 11 09:14:10 compute-0 friendly_hermann[82260]:             },
Dec 11 09:14:10 compute-0 friendly_hermann[82260]:             "type": "block",
Dec 11 09:14:10 compute-0 friendly_hermann[82260]:             "vg_name": "ceph_vg0"
Dec 11 09:14:10 compute-0 friendly_hermann[82260]:         }
Dec 11 09:14:10 compute-0 friendly_hermann[82260]:     ]
Dec 11 09:14:10 compute-0 friendly_hermann[82260]: }
Dec 11 09:14:10 compute-0 systemd[1]: libpod-fb7b959cbd898b948ec2ac3b3f6f723cc0d8159e1a9c7d927e9d2d911d5b46ad.scope: Deactivated successfully.
Dec 11 09:14:10 compute-0 podman[82244]: 2025-12-11 09:14:10.940726261 +0000 UTC m=+0.533502098 container died fb7b959cbd898b948ec2ac3b3f6f723cc0d8159e1a9c7d927e9d2d911d5b46ad (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=friendly_hermann, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default)
Dec 11 09:14:10 compute-0 systemd[1]: var-lib-containers-storage-overlay-95a234980a114e566af8017bff2b1a391897204ef4744f8db2f9ee5783386b64-merged.mount: Deactivated successfully.
Dec 11 09:14:10 compute-0 podman[82244]: 2025-12-11 09:14:10.981897646 +0000 UTC m=+0.574673483 container remove fb7b959cbd898b948ec2ac3b3f6f723cc0d8159e1a9c7d927e9d2d911d5b46ad (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=friendly_hermann, OSD_FLAVOR=default, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 11 09:14:11 compute-0 systemd[1]: libpod-conmon-fb7b959cbd898b948ec2ac3b3f6f723cc0d8159e1a9c7d927e9d2d911d5b46ad.scope: Deactivated successfully.
Dec 11 09:14:11 compute-0 sudo[82141]: pam_unix(sudo:session): session closed for user root
Dec 11 09:14:11 compute-0 ceph-mon[74426]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "osd.1"} v 0)
Dec 11 09:14:11 compute-0 ceph-mon[74426]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/1266408288' entity='mgr.compute-0.wwpcae' cmd=[{"prefix": "auth get", "entity": "osd.1"}]: dispatch
Dec 11 09:14:11 compute-0 ceph-mon[74426]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 11 09:14:11 compute-0 ceph-mon[74426]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/1266408288' entity='mgr.compute-0.wwpcae' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 11 09:14:11 compute-0 ceph-mgr[74715]: [cephadm INFO cephadm.serve] Deploying daemon osd.1 on compute-0
Dec 11 09:14:11 compute-0 ceph-mgr[74715]: log_channel(cephadm) log [INF] : Deploying daemon osd.1 on compute-0
Dec 11 09:14:11 compute-0 sudo[82280]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 11 09:14:11 compute-0 sudo[82280]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 11 09:14:11 compute-0 sudo[82280]: pam_unix(sudo:session): session closed for user root
Dec 11 09:14:11 compute-0 sudo[82305]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 _orch deploy --fsid 31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060
Dec 11 09:14:11 compute-0 sudo[82305]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 11 09:14:11 compute-0 ceph-mon[74426]: from='mgr.14124 192.168.122.100:0/1266408288' entity='mgr.compute-0.wwpcae' cmd=[{"prefix": "auth get", "entity": "osd.1"}]: dispatch
Dec 11 09:14:11 compute-0 ceph-mon[74426]: from='mgr.14124 192.168.122.100:0/1266408288' entity='mgr.compute-0.wwpcae' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 11 09:14:11 compute-0 podman[82370]: 2025-12-11 09:14:11.572308474 +0000 UTC m=+0.041501716 container create ae3ea7c2fcb3459ceed33fdd29b24fbc9b24913d759b99f9d94e526951dcd72b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=suspicious_williams, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 11 09:14:11 compute-0 systemd[1]: Started libpod-conmon-ae3ea7c2fcb3459ceed33fdd29b24fbc9b24913d759b99f9d94e526951dcd72b.scope.
Dec 11 09:14:11 compute-0 systemd[1]: Started libcrun container.
Dec 11 09:14:11 compute-0 podman[82370]: 2025-12-11 09:14:11.638573397 +0000 UTC m=+0.107766659 container init ae3ea7c2fcb3459ceed33fdd29b24fbc9b24913d759b99f9d94e526951dcd72b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=suspicious_williams, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 11 09:14:11 compute-0 podman[82370]: 2025-12-11 09:14:11.647651183 +0000 UTC m=+0.116844425 container start ae3ea7c2fcb3459ceed33fdd29b24fbc9b24913d759b99f9d94e526951dcd72b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=suspicious_williams, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 11 09:14:11 compute-0 podman[82370]: 2025-12-11 09:14:11.553809721 +0000 UTC m=+0.023002993 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec 11 09:14:11 compute-0 suspicious_williams[82386]: 167 167
Dec 11 09:14:11 compute-0 systemd[1]: libpod-ae3ea7c2fcb3459ceed33fdd29b24fbc9b24913d759b99f9d94e526951dcd72b.scope: Deactivated successfully.
Dec 11 09:14:11 compute-0 podman[82370]: 2025-12-11 09:14:11.653429235 +0000 UTC m=+0.122622507 container attach ae3ea7c2fcb3459ceed33fdd29b24fbc9b24913d759b99f9d94e526951dcd72b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=suspicious_williams, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 11 09:14:11 compute-0 conmon[82386]: conmon ae3ea7c2fcb3459ceed3 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-ae3ea7c2fcb3459ceed33fdd29b24fbc9b24913d759b99f9d94e526951dcd72b.scope/container/memory.events
Dec 11 09:14:11 compute-0 podman[82370]: 2025-12-11 09:14:11.654014014 +0000 UTC m=+0.123207246 container died ae3ea7c2fcb3459ceed33fdd29b24fbc9b24913d759b99f9d94e526951dcd72b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=suspicious_williams, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 11 09:14:11 compute-0 ceph-mon[74426]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "osd.0"} v 0)
Dec 11 09:14:11 compute-0 ceph-mon[74426]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/1266408288' entity='mgr.compute-0.wwpcae' cmd=[{"prefix": "auth get", "entity": "osd.0"}]: dispatch
Dec 11 09:14:11 compute-0 ceph-mon[74426]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 11 09:14:11 compute-0 ceph-mon[74426]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/1266408288' entity='mgr.compute-0.wwpcae' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 11 09:14:11 compute-0 ceph-mgr[74715]: [cephadm INFO cephadm.serve] Deploying daemon osd.0 on compute-1
Dec 11 09:14:11 compute-0 ceph-mgr[74715]: log_channel(cephadm) log [INF] : Deploying daemon osd.0 on compute-1
Dec 11 09:14:11 compute-0 systemd[1]: var-lib-containers-storage-overlay-2c01528a532c4888e420180d6068543d39932253cc0ec239f058e3b088624785-merged.mount: Deactivated successfully.
Dec 11 09:14:11 compute-0 podman[82370]: 2025-12-11 09:14:11.697127779 +0000 UTC m=+0.166321011 container remove ae3ea7c2fcb3459ceed33fdd29b24fbc9b24913d759b99f9d94e526951dcd72b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=suspicious_williams, CEPH_REF=squid, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 11 09:14:11 compute-0 systemd[1]: libpod-conmon-ae3ea7c2fcb3459ceed33fdd29b24fbc9b24913d759b99f9d94e526951dcd72b.scope: Deactivated successfully.
Dec 11 09:14:11 compute-0 podman[82415]: 2025-12-11 09:14:11.936023702 +0000 UTC m=+0.041637500 container create c8470c16c734c9d8f92b9f687866a997d1ca7caf2baf3802bf0b5e99322d4f89 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-osd-1-activate-test, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 11 09:14:12 compute-0 podman[82415]: 2025-12-11 09:14:11.92070203 +0000 UTC m=+0.026315838 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec 11 09:14:12 compute-0 systemd[1]: Started libpod-conmon-c8470c16c734c9d8f92b9f687866a997d1ca7caf2baf3802bf0b5e99322d4f89.scope.
Dec 11 09:14:12 compute-0 systemd[1]: Started libcrun container.
Dec 11 09:14:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f815b3b159f95f071639b4f7b3fd185bec8b901e54135696c77af4812b14a4c0/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 11 09:14:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f815b3b159f95f071639b4f7b3fd185bec8b901e54135696c77af4812b14a4c0/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 11 09:14:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f815b3b159f95f071639b4f7b3fd185bec8b901e54135696c77af4812b14a4c0/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 11 09:14:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f815b3b159f95f071639b4f7b3fd185bec8b901e54135696c77af4812b14a4c0/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 11 09:14:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f815b3b159f95f071639b4f7b3fd185bec8b901e54135696c77af4812b14a4c0/merged/var/lib/ceph/osd/ceph-1 supports timestamps until 2038 (0x7fffffff)
Dec 11 09:14:12 compute-0 podman[82415]: 2025-12-11 09:14:12.374692347 +0000 UTC m=+0.480306155 container init c8470c16c734c9d8f92b9f687866a997d1ca7caf2baf3802bf0b5e99322d4f89 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-osd-1-activate-test, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325)
Dec 11 09:14:12 compute-0 podman[82415]: 2025-12-11 09:14:12.383263516 +0000 UTC m=+0.488877314 container start c8470c16c734c9d8f92b9f687866a997d1ca7caf2baf3802bf0b5e99322d4f89 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-osd-1-activate-test, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, OSD_FLAVOR=default, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 11 09:14:12 compute-0 podman[82415]: 2025-12-11 09:14:12.387222641 +0000 UTC m=+0.492836459 container attach c8470c16c734c9d8f92b9f687866a997d1ca7caf2baf3802bf0b5e99322d4f89 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-osd-1-activate-test, OSD_FLAVOR=default, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Dec 11 09:14:12 compute-0 ceph-mon[74426]: pgmap v37: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec 11 09:14:12 compute-0 ceph-mon[74426]: Deploying daemon osd.1 on compute-0
Dec 11 09:14:12 compute-0 ceph-mon[74426]: from='mgr.14124 192.168.122.100:0/1266408288' entity='mgr.compute-0.wwpcae' cmd=[{"prefix": "auth get", "entity": "osd.0"}]: dispatch
Dec 11 09:14:12 compute-0 ceph-mon[74426]: from='mgr.14124 192.168.122.100:0/1266408288' entity='mgr.compute-0.wwpcae' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 11 09:14:12 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-osd-1-activate-test[82431]: usage: ceph-volume activate [-h] [--osd-id OSD_ID] [--osd-uuid OSD_FSID]
Dec 11 09:14:12 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-osd-1-activate-test[82431]:                             [--no-systemd] [--no-tmpfs]
Dec 11 09:14:12 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-osd-1-activate-test[82431]: ceph-volume activate: error: unrecognized arguments: --bad-option
Dec 11 09:14:12 compute-0 systemd[1]: libpod-c8470c16c734c9d8f92b9f687866a997d1ca7caf2baf3802bf0b5e99322d4f89.scope: Deactivated successfully.
Dec 11 09:14:12 compute-0 podman[82415]: 2025-12-11 09:14:12.658066119 +0000 UTC m=+0.763679917 container died c8470c16c734c9d8f92b9f687866a997d1ca7caf2baf3802bf0b5e99322d4f89 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-osd-1-activate-test, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 11 09:14:12 compute-0 systemd[1]: var-lib-containers-storage-overlay-f815b3b159f95f071639b4f7b3fd185bec8b901e54135696c77af4812b14a4c0-merged.mount: Deactivated successfully.
Dec 11 09:14:12 compute-0 podman[82415]: 2025-12-11 09:14:12.701863326 +0000 UTC m=+0.807477124 container remove c8470c16c734c9d8f92b9f687866a997d1ca7caf2baf3802bf0b5e99322d4f89 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-osd-1-activate-test, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Dec 11 09:14:12 compute-0 systemd[1]: libpod-conmon-c8470c16c734c9d8f92b9f687866a997d1ca7caf2baf3802bf0b5e99322d4f89.scope: Deactivated successfully.
Dec 11 09:14:12 compute-0 ceph-mgr[74715]: log_channel(cluster) log [DBG] : pgmap v38: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec 11 09:14:12 compute-0 systemd[1]: Reloading.
Dec 11 09:14:13 compute-0 systemd-rc-local-generator[82493]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 11 09:14:13 compute-0 systemd-sysv-generator[82497]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 11 09:14:13 compute-0 systemd[1]: Reloading.
Dec 11 09:14:13 compute-0 systemd-rc-local-generator[82534]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 11 09:14:13 compute-0 systemd-sysv-generator[82538]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 11 09:14:13 compute-0 systemd[1]: Starting Ceph osd.1 for 31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060...
Dec 11 09:14:13 compute-0 ceph-mon[74426]: Deploying daemon osd.0 on compute-1
Dec 11 09:14:13 compute-0 podman[82592]: 2025-12-11 09:14:13.741584354 +0000 UTC m=+0.024004387 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec 11 09:14:13 compute-0 podman[82592]: 2025-12-11 09:14:13.849438075 +0000 UTC m=+0.131858068 container create 3221fae59742ef48165af3d4672d5ca2f39c94197b56eb7a3612fad2bb434b7b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-osd-1-activate, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Dec 11 09:14:13 compute-0 ceph-mon[74426]: mon.compute-0@0(leader).osd e5 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 11 09:14:14 compute-0 systemd[1]: Started libcrun container.
Dec 11 09:14:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/afb4c90dcf5488095bdd1cdeab7e964582d3f12de29b453f1f73ad3b584a3d9d/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 11 09:14:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/afb4c90dcf5488095bdd1cdeab7e964582d3f12de29b453f1f73ad3b584a3d9d/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 11 09:14:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/afb4c90dcf5488095bdd1cdeab7e964582d3f12de29b453f1f73ad3b584a3d9d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 11 09:14:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/afb4c90dcf5488095bdd1cdeab7e964582d3f12de29b453f1f73ad3b584a3d9d/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 11 09:14:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/afb4c90dcf5488095bdd1cdeab7e964582d3f12de29b453f1f73ad3b584a3d9d/merged/var/lib/ceph/osd/ceph-1 supports timestamps until 2038 (0x7fffffff)
Dec 11 09:14:14 compute-0 podman[82592]: 2025-12-11 09:14:14.330090211 +0000 UTC m=+0.612510234 container init 3221fae59742ef48165af3d4672d5ca2f39c94197b56eb7a3612fad2bb434b7b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-osd-1-activate, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, CEPH_REF=squid, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec 11 09:14:14 compute-0 podman[82592]: 2025-12-11 09:14:14.336819462 +0000 UTC m=+0.619239465 container start 3221fae59742ef48165af3d4672d5ca2f39c94197b56eb7a3612fad2bb434b7b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-osd-1-activate, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 11 09:14:14 compute-0 podman[82592]: 2025-12-11 09:14:14.440806772 +0000 UTC m=+0.723226785 container attach 3221fae59742ef48165af3d4672d5ca2f39c94197b56eb7a3612fad2bb434b7b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-osd-1-activate, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=squid)
Dec 11 09:14:14 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-osd-1-activate[82606]: Running command: /usr/bin/ceph-authtool --gen-print-key
Dec 11 09:14:14 compute-0 bash[82592]: Running command: /usr/bin/ceph-authtool --gen-print-key
Dec 11 09:14:14 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-osd-1-activate[82606]: Running command: /usr/bin/ceph-authtool --gen-print-key
Dec 11 09:14:14 compute-0 bash[82592]: Running command: /usr/bin/ceph-authtool --gen-print-key
Dec 11 09:14:14 compute-0 ceph-mgr[74715]: log_channel(cluster) log [DBG] : pgmap v39: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec 11 09:14:14 compute-0 ceph-mon[74426]: pgmap v38: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec 11 09:14:15 compute-0 lvm[82687]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Dec 11 09:14:15 compute-0 lvm[82687]: VG ceph_vg0 finished
Dec 11 09:14:15 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-osd-1-activate[82606]: --> Failed to activate via raw: did not find any matching OSD to activate
Dec 11 09:14:15 compute-0 bash[82592]: --> Failed to activate via raw: did not find any matching OSD to activate
Dec 11 09:14:15 compute-0 bash[82592]: Running command: /usr/bin/ceph-authtool --gen-print-key
Dec 11 09:14:15 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-osd-1-activate[82606]: Running command: /usr/bin/ceph-authtool --gen-print-key
Dec 11 09:14:15 compute-0 lvm[82691]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Dec 11 09:14:15 compute-0 lvm[82691]: VG ceph_vg0 finished
Dec 11 09:14:15 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-osd-1-activate[82606]: Running command: /usr/bin/ceph-authtool --gen-print-key
Dec 11 09:14:15 compute-0 bash[82592]: Running command: /usr/bin/ceph-authtool --gen-print-key
Dec 11 09:14:15 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-osd-1-activate[82606]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-1
Dec 11 09:14:15 compute-0 bash[82592]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-1
Dec 11 09:14:15 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-osd-1-activate[82606]: Running command: /usr/bin/ceph-bluestore-tool --cluster=ceph prime-osd-dir --dev /dev/ceph_vg0/ceph_lv0 --path /var/lib/ceph/osd/ceph-1 --no-mon-config
Dec 11 09:14:15 compute-0 bash[82592]: Running command: /usr/bin/ceph-bluestore-tool --cluster=ceph prime-osd-dir --dev /dev/ceph_vg0/ceph_lv0 --path /var/lib/ceph/osd/ceph-1 --no-mon-config
Dec 11 09:14:15 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-osd-1-activate[82606]: Running command: /usr/bin/ln -snf /dev/ceph_vg0/ceph_lv0 /var/lib/ceph/osd/ceph-1/block
Dec 11 09:14:15 compute-0 bash[82592]: Running command: /usr/bin/ln -snf /dev/ceph_vg0/ceph_lv0 /var/lib/ceph/osd/ceph-1/block
Dec 11 09:14:15 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-osd-1-activate[82606]: Running command: /usr/bin/chown -h ceph:ceph /var/lib/ceph/osd/ceph-1/block
Dec 11 09:14:15 compute-0 bash[82592]: Running command: /usr/bin/chown -h ceph:ceph /var/lib/ceph/osd/ceph-1/block
Dec 11 09:14:15 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-osd-1-activate[82606]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-0
Dec 11 09:14:15 compute-0 bash[82592]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-0
Dec 11 09:14:15 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-osd-1-activate[82606]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-1
Dec 11 09:14:15 compute-0 bash[82592]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-1
Dec 11 09:14:15 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-osd-1-activate[82606]: --> ceph-volume lvm activate successful for osd ID: 1
Dec 11 09:14:15 compute-0 bash[82592]: --> ceph-volume lvm activate successful for osd ID: 1
Dec 11 09:14:15 compute-0 systemd[1]: libpod-3221fae59742ef48165af3d4672d5ca2f39c94197b56eb7a3612fad2bb434b7b.scope: Deactivated successfully.
Dec 11 09:14:15 compute-0 systemd[1]: libpod-3221fae59742ef48165af3d4672d5ca2f39c94197b56eb7a3612fad2bb434b7b.scope: Consumed 1.512s CPU time.
Dec 11 09:14:15 compute-0 podman[82592]: 2025-12-11 09:14:15.636484595 +0000 UTC m=+1.918904618 container died 3221fae59742ef48165af3d4672d5ca2f39c94197b56eb7a3612fad2bb434b7b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-osd-1-activate, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Dec 11 09:14:15 compute-0 systemd[1]: var-lib-containers-storage-overlay-afb4c90dcf5488095bdd1cdeab7e964582d3f12de29b453f1f73ad3b584a3d9d-merged.mount: Deactivated successfully.
Dec 11 09:14:15 compute-0 podman[82592]: 2025-12-11 09:14:15.687722597 +0000 UTC m=+1.970142590 container remove 3221fae59742ef48165af3d4672d5ca2f39c94197b56eb7a3612fad2bb434b7b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-osd-1-activate, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, ceph=True, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec 11 09:14:15 compute-0 podman[82839]: 2025-12-11 09:14:15.922020875 +0000 UTC m=+0.045400219 container create df7a11d056718a26ce1ee47cf66c49da6b3921af42053f201c68fa0c18b7b9ad (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-osd-1, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True)
Dec 11 09:14:15 compute-0 ceph-mon[74426]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Dec 11 09:14:15 compute-0 ceph-mon[74426]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/1266408288' entity='mgr.compute-0.wwpcae' 
Dec 11 09:14:15 compute-0 ceph-mon[74426]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Dec 11 09:14:15 compute-0 ceph-mon[74426]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/1266408288' entity='mgr.compute-0.wwpcae' 
Dec 11 09:14:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6da0507ec51f851c7ccf37601f26c0763586a6e4883596e2c44e44af1edfc80c/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 11 09:14:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6da0507ec51f851c7ccf37601f26c0763586a6e4883596e2c44e44af1edfc80c/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 11 09:14:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6da0507ec51f851c7ccf37601f26c0763586a6e4883596e2c44e44af1edfc80c/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 11 09:14:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6da0507ec51f851c7ccf37601f26c0763586a6e4883596e2c44e44af1edfc80c/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 11 09:14:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6da0507ec51f851c7ccf37601f26c0763586a6e4883596e2c44e44af1edfc80c/merged/var/lib/ceph/osd/ceph-1 supports timestamps until 2038 (0x7fffffff)
Dec 11 09:14:15 compute-0 ceph-mon[74426]: pgmap v39: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec 11 09:14:15 compute-0 ceph-mon[74426]: from='mgr.14124 192.168.122.100:0/1266408288' entity='mgr.compute-0.wwpcae' 
Dec 11 09:14:15 compute-0 ceph-mon[74426]: from='mgr.14124 192.168.122.100:0/1266408288' entity='mgr.compute-0.wwpcae' 
Dec 11 09:14:15 compute-0 podman[82839]: 2025-12-11 09:14:15.990973043 +0000 UTC m=+0.114352417 container init df7a11d056718a26ce1ee47cf66c49da6b3921af42053f201c68fa0c18b7b9ad (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-osd-1, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Dec 11 09:14:15 compute-0 podman[82839]: 2025-12-11 09:14:15.997134997 +0000 UTC m=+0.120514331 container start df7a11d056718a26ce1ee47cf66c49da6b3921af42053f201c68fa0c18b7b9ad (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-osd-1, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 11 09:14:15 compute-0 podman[82839]: 2025-12-11 09:14:15.902273424 +0000 UTC m=+0.025652788 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec 11 09:14:16 compute-0 bash[82839]: df7a11d056718a26ce1ee47cf66c49da6b3921af42053f201c68fa0c18b7b9ad
Dec 11 09:14:16 compute-0 systemd[1]: Started Ceph osd.1 for 31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060.
Dec 11 09:14:16 compute-0 ceph-osd[82859]: set uid:gid to 167:167 (ceph:ceph)
Dec 11 09:14:16 compute-0 ceph-osd[82859]: ceph version 19.2.3 (c92aebb279828e9c3c1f5d24613efca272649e62) squid (stable), process ceph-osd, pid 2
Dec 11 09:14:16 compute-0 ceph-osd[82859]: pidfile_write: ignore empty --pid-file
Dec 11 09:14:16 compute-0 ceph-osd[82859]: bdev(0x5636657f7800 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Dec 11 09:14:16 compute-0 ceph-osd[82859]: bdev(0x5636657f7800 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Dec 11 09:14:16 compute-0 ceph-osd[82859]: bdev(0x5636657f7800 /var/lib/ceph/osd/ceph-1/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Dec 11 09:14:16 compute-0 ceph-osd[82859]: bdev(0x5636657f7800 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Dec 11 09:14:16 compute-0 ceph-osd[82859]: bdev(0x5636657f7800 /var/lib/ceph/osd/ceph-1/block) close
Dec 11 09:14:16 compute-0 sudo[82305]: pam_unix(sudo:session): session closed for user root
Dec 11 09:14:16 compute-0 ceph-mon[74426]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec 11 09:14:16 compute-0 ceph-mon[74426]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/1266408288' entity='mgr.compute-0.wwpcae' 
Dec 11 09:14:16 compute-0 ceph-mon[74426]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec 11 09:14:16 compute-0 ceph-mon[74426]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/1266408288' entity='mgr.compute-0.wwpcae' 
Dec 11 09:14:16 compute-0 sudo[82871]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 11 09:14:16 compute-0 sudo[82871]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 11 09:14:16 compute-0 sudo[82871]: pam_unix(sudo:session): session closed for user root
Dec 11 09:14:16 compute-0 sudo[82896]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060 -- raw list --format json
Dec 11 09:14:16 compute-0 sudo[82896]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 11 09:14:16 compute-0 ceph-osd[82859]: bdev(0x5636657f7800 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Dec 11 09:14:16 compute-0 ceph-osd[82859]: bdev(0x5636657f7800 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Dec 11 09:14:16 compute-0 ceph-osd[82859]: bdev(0x5636657f7800 /var/lib/ceph/osd/ceph-1/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Dec 11 09:14:16 compute-0 ceph-osd[82859]: bdev(0x5636657f7800 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Dec 11 09:14:16 compute-0 ceph-osd[82859]: bdev(0x5636657f7800 /var/lib/ceph/osd/ceph-1/block) close
Dec 11 09:14:16 compute-0 ceph-osd[82859]: bdev(0x5636657f7800 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Dec 11 09:14:16 compute-0 ceph-osd[82859]: bdev(0x5636657f7800 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Dec 11 09:14:16 compute-0 ceph-osd[82859]: bdev(0x5636657f7800 /var/lib/ceph/osd/ceph-1/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Dec 11 09:14:16 compute-0 ceph-osd[82859]: bdev(0x5636657f7800 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Dec 11 09:14:16 compute-0 ceph-osd[82859]: bdev(0x5636657f7800 /var/lib/ceph/osd/ceph-1/block) close
Dec 11 09:14:16 compute-0 ceph-osd[82859]: bdev(0x5636657f7800 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Dec 11 09:14:16 compute-0 ceph-osd[82859]: bdev(0x5636657f7800 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Dec 11 09:14:16 compute-0 ceph-osd[82859]: bdev(0x5636657f7800 /var/lib/ceph/osd/ceph-1/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Dec 11 09:14:16 compute-0 ceph-osd[82859]: bdev(0x5636657f7800 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Dec 11 09:14:16 compute-0 ceph-osd[82859]: bdev(0x5636657f7800 /var/lib/ceph/osd/ceph-1/block) close
Dec 11 09:14:16 compute-0 ceph-osd[82859]: bdev(0x5636657f7800 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Dec 11 09:14:16 compute-0 ceph-osd[82859]: bdev(0x5636657f7800 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Dec 11 09:14:16 compute-0 ceph-osd[82859]: bdev(0x5636657f7800 /var/lib/ceph/osd/ceph-1/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Dec 11 09:14:16 compute-0 ceph-osd[82859]: bdev(0x5636657f7800 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Dec 11 09:14:16 compute-0 ceph-osd[82859]: bdev(0x5636657f7800 /var/lib/ceph/osd/ceph-1/block) close
Dec 11 09:14:16 compute-0 ceph-osd[82859]: bdev(0x5636657f7800 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Dec 11 09:14:16 compute-0 ceph-osd[82859]: bdev(0x5636657f7800 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Dec 11 09:14:16 compute-0 ceph-osd[82859]: bdev(0x5636657f7800 /var/lib/ceph/osd/ceph-1/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Dec 11 09:14:16 compute-0 ceph-osd[82859]: bdev(0x5636657f7800 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Dec 11 09:14:16 compute-0 ceph-osd[82859]: bluestore(/var/lib/ceph/osd/ceph-1) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Dec 11 09:14:16 compute-0 ceph-osd[82859]: bdev(0x563666621400 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Dec 11 09:14:16 compute-0 ceph-osd[82859]: bdev(0x563666621400 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Dec 11 09:14:16 compute-0 ceph-osd[82859]: bdev(0x563666621400 /var/lib/ceph/osd/ceph-1/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Dec 11 09:14:16 compute-0 ceph-osd[82859]: bdev(0x563666621400 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Dec 11 09:14:16 compute-0 ceph-osd[82859]: bluefs add_block_device bdev 1 path /var/lib/ceph/osd/ceph-1/block size 20 GiB
Dec 11 09:14:16 compute-0 ceph-osd[82859]: bdev(0x563666621400 /var/lib/ceph/osd/ceph-1/block) close
Dec 11 09:14:16 compute-0 podman[82972]: 2025-12-11 09:14:16.673583531 +0000 UTC m=+0.045632817 container create c07072be735f88a7d3c42f6099b3e9977fedda5d1ba0e5386a8c949ec2446b09 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=peaceful_matsumoto, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.build-date=20250325, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 11 09:14:16 compute-0 ceph-osd[82859]: bdev(0x5636657f7800 /var/lib/ceph/osd/ceph-1/block) close
Dec 11 09:14:16 compute-0 ceph-mgr[74715]: log_channel(cluster) log [DBG] : pgmap v40: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec 11 09:14:16 compute-0 systemd[1]: Started libpod-conmon-c07072be735f88a7d3c42f6099b3e9977fedda5d1ba0e5386a8c949ec2446b09.scope.
Dec 11 09:14:16 compute-0 podman[82972]: 2025-12-11 09:14:16.65452783 +0000 UTC m=+0.026577146 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec 11 09:14:16 compute-0 systemd[1]: Started libcrun container.
Dec 11 09:14:16 compute-0 podman[82972]: 2025-12-11 09:14:16.773399249 +0000 UTC m=+0.145448945 container init c07072be735f88a7d3c42f6099b3e9977fedda5d1ba0e5386a8c949ec2446b09 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=peaceful_matsumoto, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 11 09:14:16 compute-0 podman[82972]: 2025-12-11 09:14:16.781671859 +0000 UTC m=+0.153721145 container start c07072be735f88a7d3c42f6099b3e9977fedda5d1ba0e5386a8c949ec2446b09 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=peaceful_matsumoto, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=squid, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Dec 11 09:14:16 compute-0 podman[82972]: 2025-12-11 09:14:16.785430068 +0000 UTC m=+0.157479444 container attach c07072be735f88a7d3c42f6099b3e9977fedda5d1ba0e5386a8c949ec2446b09 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=peaceful_matsumoto, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec 11 09:14:16 compute-0 peaceful_matsumoto[82991]: 167 167
Dec 11 09:14:16 compute-0 systemd[1]: libpod-c07072be735f88a7d3c42f6099b3e9977fedda5d1ba0e5386a8c949ec2446b09.scope: Deactivated successfully.
Dec 11 09:14:16 compute-0 conmon[82991]: conmon c07072be735f88a7d3c4 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-c07072be735f88a7d3c42f6099b3e9977fedda5d1ba0e5386a8c949ec2446b09.scope/container/memory.events
Dec 11 09:14:16 compute-0 podman[82972]: 2025-12-11 09:14:16.792639074 +0000 UTC m=+0.164688360 container died c07072be735f88a7d3c42f6099b3e9977fedda5d1ba0e5386a8c949ec2446b09 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=peaceful_matsumoto, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, ceph=True, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 11 09:14:16 compute-0 systemd[1]: var-lib-containers-storage-overlay-77b0d483cc23afc07b7210a20fd4972aa8076c9fb0960444a1234969dd8f35a2-merged.mount: Deactivated successfully.
Dec 11 09:14:16 compute-0 podman[82972]: 2025-12-11 09:14:16.841296235 +0000 UTC m=+0.213345521 container remove c07072be735f88a7d3c42f6099b3e9977fedda5d1ba0e5386a8c949ec2446b09 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=peaceful_matsumoto, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.build-date=20250325, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Dec 11 09:14:16 compute-0 systemd[1]: libpod-conmon-c07072be735f88a7d3c42f6099b3e9977fedda5d1ba0e5386a8c949ec2446b09.scope: Deactivated successfully.
Dec 11 09:14:16 compute-0 ceph-osd[82859]: starting osd.1 osd_data /var/lib/ceph/osd/ceph-1 /var/lib/ceph/osd/ceph-1/journal
Dec 11 09:14:16 compute-0 ceph-osd[82859]: load: jerasure load: lrc 
Dec 11 09:14:16 compute-0 ceph-osd[82859]: bdev(0x563666621c00 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Dec 11 09:14:16 compute-0 ceph-osd[82859]: bdev(0x563666621c00 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Dec 11 09:14:16 compute-0 ceph-osd[82859]: bdev(0x563666621c00 /var/lib/ceph/osd/ceph-1/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Dec 11 09:14:16 compute-0 ceph-osd[82859]: bdev(0x563666621c00 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Dec 11 09:14:16 compute-0 ceph-osd[82859]: bluestore(/var/lib/ceph/osd/ceph-1) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Dec 11 09:14:16 compute-0 ceph-osd[82859]: bdev(0x563666621c00 /var/lib/ceph/osd/ceph-1/block) close
Dec 11 09:14:17 compute-0 podman[83015]: 2025-12-11 09:14:17.041899313 +0000 UTC m=+0.061381111 container create 58160c5cb5900da147df30f5774f12fb632d434b94571b9aee209772ba826ffb (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sweet_jang, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=squid, io.buildah.version=1.40.1, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 11 09:14:17 compute-0 ceph-mon[74426]: from='mgr.14124 192.168.122.100:0/1266408288' entity='mgr.compute-0.wwpcae' 
Dec 11 09:14:17 compute-0 ceph-mon[74426]: from='mgr.14124 192.168.122.100:0/1266408288' entity='mgr.compute-0.wwpcae' 
Dec 11 09:14:17 compute-0 systemd[1]: Started libpod-conmon-58160c5cb5900da147df30f5774f12fb632d434b94571b9aee209772ba826ffb.scope.
Dec 11 09:14:17 compute-0 podman[83015]: 2025-12-11 09:14:17.019104506 +0000 UTC m=+0.038586344 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec 11 09:14:17 compute-0 systemd[1]: Started libcrun container.
Dec 11 09:14:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5b78016bd2eecc3a1170a979e28c1e2b95c9a3d1d94bdf62711053ea3ab0446c/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 11 09:14:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5b78016bd2eecc3a1170a979e28c1e2b95c9a3d1d94bdf62711053ea3ab0446c/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 11 09:14:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5b78016bd2eecc3a1170a979e28c1e2b95c9a3d1d94bdf62711053ea3ab0446c/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 11 09:14:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5b78016bd2eecc3a1170a979e28c1e2b95c9a3d1d94bdf62711053ea3ab0446c/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 11 09:14:17 compute-0 podman[83015]: 2025-12-11 09:14:17.143287182 +0000 UTC m=+0.162769010 container init 58160c5cb5900da147df30f5774f12fb632d434b94571b9aee209772ba826ffb (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sweet_jang, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Dec 11 09:14:17 compute-0 podman[83015]: 2025-12-11 09:14:17.152477861 +0000 UTC m=+0.171959679 container start 58160c5cb5900da147df30f5774f12fb632d434b94571b9aee209772ba826ffb (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sweet_jang, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.40.1, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Dec 11 09:14:17 compute-0 podman[83015]: 2025-12-11 09:14:17.156822717 +0000 UTC m=+0.176304525 container attach 58160c5cb5900da147df30f5774f12fb632d434b94571b9aee209772ba826ffb (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sweet_jang, ceph=True, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 11 09:14:17 compute-0 ceph-osd[82859]: bdev(0x563666621c00 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Dec 11 09:14:17 compute-0 ceph-osd[82859]: bdev(0x563666621c00 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Dec 11 09:14:17 compute-0 ceph-osd[82859]: bdev(0x563666621c00 /var/lib/ceph/osd/ceph-1/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Dec 11 09:14:17 compute-0 ceph-osd[82859]: bdev(0x563666621c00 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Dec 11 09:14:17 compute-0 ceph-osd[82859]: bluestore(/var/lib/ceph/osd/ceph-1) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Dec 11 09:14:17 compute-0 ceph-osd[82859]: bdev(0x563666621c00 /var/lib/ceph/osd/ceph-1/block) close
Dec 11 09:14:17 compute-0 ceph-osd[82859]: mClockScheduler: set_osd_capacity_params_from_config: osd_bandwidth_cost_per_io: 499321.90 bytes/io, osd_bandwidth_capacity_per_shard 157286400.00 bytes/second
Dec 11 09:14:17 compute-0 ceph-osd[82859]: osd.1:0.OSDShard using op scheduler mclock_scheduler, cutoff=196
Dec 11 09:14:17 compute-0 ceph-osd[82859]: bdev(0x563666621c00 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Dec 11 09:14:17 compute-0 ceph-osd[82859]: bdev(0x563666621c00 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Dec 11 09:14:17 compute-0 ceph-osd[82859]: bdev(0x563666621c00 /var/lib/ceph/osd/ceph-1/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Dec 11 09:14:17 compute-0 ceph-osd[82859]: bdev(0x563666621c00 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Dec 11 09:14:17 compute-0 ceph-osd[82859]: bdev(0x563666621c00 /var/lib/ceph/osd/ceph-1/block) close
Dec 11 09:14:17 compute-0 ceph-osd[82859]: bdev(0x563666621c00 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Dec 11 09:14:17 compute-0 ceph-osd[82859]: bdev(0x563666621c00 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Dec 11 09:14:17 compute-0 ceph-osd[82859]: bdev(0x563666621c00 /var/lib/ceph/osd/ceph-1/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Dec 11 09:14:17 compute-0 ceph-osd[82859]: bdev(0x563666621c00 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Dec 11 09:14:17 compute-0 ceph-osd[82859]: bdev(0x563666621c00 /var/lib/ceph/osd/ceph-1/block) close
Dec 11 09:14:17 compute-0 lvm[83124]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Dec 11 09:14:17 compute-0 lvm[83124]: VG ceph_vg0 finished
Dec 11 09:14:17 compute-0 ceph-mon[74426]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Dec 11 09:14:17 compute-0 ceph-mon[74426]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/1266408288' entity='mgr.compute-0.wwpcae' 
Dec 11 09:14:17 compute-0 ceph-mon[74426]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Dec 11 09:14:17 compute-0 ceph-mon[74426]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/1266408288' entity='mgr.compute-0.wwpcae' 
Dec 11 09:14:18 compute-0 sweet_jang[83036]: {}
Dec 11 09:14:18 compute-0 systemd[1]: libpod-58160c5cb5900da147df30f5774f12fb632d434b94571b9aee209772ba826ffb.scope: Deactivated successfully.
Dec 11 09:14:18 compute-0 ceph-mon[74426]: pgmap v40: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec 11 09:14:18 compute-0 ceph-mon[74426]: from='mgr.14124 192.168.122.100:0/1266408288' entity='mgr.compute-0.wwpcae' 
Dec 11 09:14:18 compute-0 ceph-mon[74426]: from='mgr.14124 192.168.122.100:0/1266408288' entity='mgr.compute-0.wwpcae' 
Dec 11 09:14:18 compute-0 systemd[1]: libpod-58160c5cb5900da147df30f5774f12fb632d434b94571b9aee209772ba826ffb.scope: Consumed 1.637s CPU time.
Dec 11 09:14:18 compute-0 ceph-osd[82859]: bdev(0x563666621c00 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Dec 11 09:14:18 compute-0 ceph-osd[82859]: bdev(0x563666621c00 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Dec 11 09:14:18 compute-0 ceph-osd[82859]: bdev(0x563666621c00 /var/lib/ceph/osd/ceph-1/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Dec 11 09:14:18 compute-0 ceph-osd[82859]: bdev(0x563666621c00 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Dec 11 09:14:18 compute-0 ceph-osd[82859]: bdev(0x563666621c00 /var/lib/ceph/osd/ceph-1/block) close
Dec 11 09:14:18 compute-0 podman[83128]: 2025-12-11 09:14:18.159104307 +0000 UTC m=+0.033235556 container died 58160c5cb5900da147df30f5774f12fb632d434b94571b9aee209772ba826ffb (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sweet_jang, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 11 09:14:18 compute-0 systemd[1]: var-lib-containers-storage-overlay-5b78016bd2eecc3a1170a979e28c1e2b95c9a3d1d94bdf62711053ea3ab0446c-merged.mount: Deactivated successfully.
Dec 11 09:14:18 compute-0 podman[83128]: 2025-12-11 09:14:18.208714037 +0000 UTC m=+0.082845276 container remove 58160c5cb5900da147df30f5774f12fb632d434b94571b9aee209772ba826ffb (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sweet_jang, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 11 09:14:18 compute-0 systemd[1]: libpod-conmon-58160c5cb5900da147df30f5774f12fb632d434b94571b9aee209772ba826ffb.scope: Deactivated successfully.
Dec 11 09:14:18 compute-0 sudo[82896]: pam_unix(sudo:session): session closed for user root
Dec 11 09:14:18 compute-0 ceph-mon[74426]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec 11 09:14:18 compute-0 ceph-mon[74426]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/1266408288' entity='mgr.compute-0.wwpcae' 
Dec 11 09:14:18 compute-0 ceph-mon[74426]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec 11 09:14:18 compute-0 ceph-mon[74426]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/1266408288' entity='mgr.compute-0.wwpcae' 
Dec 11 09:14:18 compute-0 sudo[83145]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 11 09:14:18 compute-0 sudo[83145]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 11 09:14:18 compute-0 sudo[83145]: pam_unix(sudo:session): session closed for user root
Dec 11 09:14:18 compute-0 ceph-osd[82859]: bdev(0x563666621c00 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Dec 11 09:14:18 compute-0 ceph-osd[82859]: bdev(0x563666621c00 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Dec 11 09:14:18 compute-0 ceph-osd[82859]: bdev(0x563666621c00 /var/lib/ceph/osd/ceph-1/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Dec 11 09:14:18 compute-0 ceph-osd[82859]: bdev(0x563666621c00 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Dec 11 09:14:18 compute-0 ceph-osd[82859]: bluestore(/var/lib/ceph/osd/ceph-1) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Dec 11 09:14:18 compute-0 ceph-osd[82859]: bdev(0x563666800000 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Dec 11 09:14:18 compute-0 ceph-osd[82859]: bdev(0x563666800000 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Dec 11 09:14:18 compute-0 ceph-osd[82859]: bdev(0x563666800000 /var/lib/ceph/osd/ceph-1/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Dec 11 09:14:18 compute-0 ceph-osd[82859]: bdev(0x563666800000 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Dec 11 09:14:18 compute-0 ceph-osd[82859]: bluefs add_block_device bdev 1 path /var/lib/ceph/osd/ceph-1/block size 20 GiB
Dec 11 09:14:18 compute-0 ceph-osd[82859]: bluefs mount
Dec 11 09:14:18 compute-0 ceph-osd[82859]: bluefs _init_alloc shared, id 1, capacity 0x4ffc00000, block size 0x10000
Dec 11 09:14:18 compute-0 ceph-osd[82859]: bluefs mount final locked allocations 0 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Dec 11 09:14:18 compute-0 ceph-osd[82859]: bluefs mount final locked allocations 1 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Dec 11 09:14:18 compute-0 ceph-osd[82859]: bluefs mount final locked allocations 2 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Dec 11 09:14:18 compute-0 ceph-osd[82859]: bluefs mount final locked allocations 3 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Dec 11 09:14:18 compute-0 ceph-osd[82859]: bluefs mount final locked allocations 4 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Dec 11 09:14:18 compute-0 ceph-osd[82859]: bluefs mount shared_bdev_used = 0
Dec 11 09:14:18 compute-0 ceph-osd[82859]: bluestore(/var/lib/ceph/osd/ceph-1) _prepare_db_environment set db_paths to db,20397110067 db.slow,20397110067
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb: RocksDB version: 7.9.2
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb: Git sha 0
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb: Compile date 2025-07-17 03:12:14
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb: DB SUMMARY
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb: DB Session ID:  E1KKO7X1C2VB9OE8D4E4
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb: CURRENT file:  CURRENT
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb: IDENTITY file:  IDENTITY
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb: MANIFEST file:  MANIFEST-000032 size: 1007 Bytes
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb: SST files in db dir, Total Num: 1, files: 000030.sst 
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb: SST files in db.slow dir, Total Num: 0, files: 
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb: Write Ahead Log file in db.wal: 000031.log size: 5097 ; 
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:                         Options.error_if_exists: 0
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:                       Options.create_if_missing: 0
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:                         Options.paranoid_checks: 1
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:             Options.flush_verify_memtable_count: 1
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:                               Options.track_and_verify_wals_in_manifest: 0
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:        Options.verify_sst_unique_id_in_manifest: 1
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:                                     Options.env: 0x56366666ddc0
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:                                      Options.fs: LegacyFileSystem
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:                                Options.info_log: 0x5636666717e0
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:                Options.max_file_opening_threads: 16
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:                              Options.statistics: (nil)
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:                               Options.use_fsync: 0
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:                       Options.max_log_file_size: 0
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:                  Options.max_manifest_file_size: 1073741824
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:                   Options.log_file_time_to_roll: 0
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:                       Options.keep_log_file_num: 1000
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:                    Options.recycle_log_file_num: 0
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:                         Options.allow_fallocate: 1
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:                        Options.allow_mmap_reads: 0
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:                       Options.allow_mmap_writes: 0
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:                        Options.use_direct_reads: 0
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:                        Options.use_direct_io_for_flush_and_compaction: 0
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:          Options.create_missing_column_families: 0
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:                              Options.db_log_dir: 
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:                                 Options.wal_dir: db.wal
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:                Options.table_cache_numshardbits: 6
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:                         Options.WAL_ttl_seconds: 0
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:                       Options.WAL_size_limit_MB: 0
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:                        Options.max_write_batch_group_size_bytes: 1048576
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:             Options.manifest_preallocation_size: 4194304
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:                     Options.is_fd_close_on_exec: 1
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:                   Options.advise_random_on_open: 1
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:                    Options.db_write_buffer_size: 0
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:                    Options.write_buffer_manager: 0x563666764a00
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:         Options.access_hint_on_compaction_start: 1
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:           Options.random_access_max_buffer_size: 1048576
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:                      Options.use_adaptive_mutex: 0
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:                            Options.rate_limiter: (nil)
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:     Options.sst_file_manager.rate_bytes_per_sec: 0
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:                       Options.wal_recovery_mode: 2
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:                  Options.enable_thread_tracking: 0
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:                  Options.enable_pipelined_write: 0
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:                  Options.unordered_write: 0
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:         Options.allow_concurrent_memtable_write: 1
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:      Options.enable_write_thread_adaptive_yield: 1
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:             Options.write_thread_max_yield_usec: 100
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:            Options.write_thread_slow_yield_usec: 3
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:                               Options.row_cache: None
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:                              Options.wal_filter: None
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:             Options.avoid_flush_during_recovery: 0
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:             Options.allow_ingest_behind: 0
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:             Options.two_write_queues: 0
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:             Options.manual_wal_flush: 0
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:             Options.wal_compression: 0
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:             Options.atomic_flush: 0
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:             Options.avoid_unnecessary_blocking_io: 0
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:                 Options.persist_stats_to_disk: 0
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:                 Options.write_dbid_to_manifest: 0
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:                 Options.log_readahead_size: 0
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:                 Options.file_checksum_gen_factory: Unknown
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:                 Options.best_efforts_recovery: 0
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:                Options.max_bgerror_resume_count: 2147483647
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:            Options.bgerror_resume_retry_interval: 1000000
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:             Options.allow_data_in_errors: 0
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:             Options.db_host_id: __hostname__
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:             Options.enforce_single_del_contracts: true
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:             Options.max_background_jobs: 4
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:             Options.max_background_compactions: -1
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:             Options.max_subcompactions: 1
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:             Options.avoid_flush_during_shutdown: 0
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:           Options.writable_file_max_buffer_size: 0
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:             Options.delayed_write_rate : 16777216
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:             Options.max_total_wal_size: 1073741824
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:             Options.delete_obsolete_files_period_micros: 21600000000
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:                   Options.stats_dump_period_sec: 600
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:                 Options.stats_persist_period_sec: 600
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:                 Options.stats_history_buffer_size: 1048576
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:                          Options.max_open_files: -1
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:                          Options.bytes_per_sync: 0
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:                      Options.wal_bytes_per_sync: 0
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:                   Options.strict_bytes_per_sync: 0
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:       Options.compaction_readahead_size: 2097152
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:                  Options.max_background_flushes: -1
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb: Compression algorithms supported:
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:         kZSTD supported: 0
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:         kXpressCompression supported: 0
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:         kBZip2Compression supported: 0
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:         kZSTDNotFinalCompression supported: 0
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:         kLZ4Compression supported: 1
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:         kZlibCompression supported: 1
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:         kLZ4HCCompression supported: 1
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:         kSnappyCompression supported: 1
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb: Fast CRC32 supported: Supported on x86
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb: DMutex implementation: pthread_mutex_t
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb: [db/db_impl/db_impl_readonly.cc:25] Opening the db in read only mode
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb: [db/version_set.cc:5527] Recovering from manifest file: db/MANIFEST-000032
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [default]:
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:           Options.merge_operator: .T:int64_array.b:bitwise_xor
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:        Options.compaction_filter: None
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:        Options.compaction_filter_factory: None
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:  Options.sst_partitioner_factory: None
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:            Options.table_factory: BlockBasedTable
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x563666671ba0)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x56366588d350
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:        Options.write_buffer_size: 16777216
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:  Options.max_write_buffer_number: 64
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:          Options.compression: LZ4
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:                  Options.bottommost_compression: Disabled
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:       Options.prefix_extractor: nullptr
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:             Options.num_levels: 7
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:            Options.compression_opts.window_bits: -14
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:                  Options.compression_opts.level: 32767
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:               Options.compression_opts.strategy: 0
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:                  Options.compression_opts.enabled: false
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:                   Options.target_file_size_base: 67108864
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:             Options.target_file_size_multiplier: 1
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:                        Options.arena_block_size: 1048576
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:                Options.disable_auto_compactions: 0
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:                   Options.inplace_update_support: 0
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:   Options.memtable_huge_page_size: 0
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:                           Options.bloom_locality: 0
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:                    Options.max_successive_merges: 0
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:                Options.paranoid_file_checks: 0
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:                Options.force_consistency_checks: 1
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:                Options.report_bg_io_stats: 0
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:                               Options.ttl: 2592000
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:                       Options.enable_blob_files: false
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:                           Options.min_blob_size: 0
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:                          Options.blob_file_size: 268435456
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:                Options.blob_file_starting_level: 0
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-0]:
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:           Options.merge_operator: None
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:        Options.compaction_filter: None
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:        Options.compaction_filter_factory: None
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:  Options.sst_partitioner_factory: None
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:            Options.table_factory: BlockBasedTable
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x563666671ba0)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x56366588d350
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:        Options.write_buffer_size: 16777216
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:  Options.max_write_buffer_number: 64
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:          Options.compression: LZ4
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:                  Options.bottommost_compression: Disabled
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:       Options.prefix_extractor: nullptr
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:             Options.num_levels: 7
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:            Options.compression_opts.window_bits: -14
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:                  Options.compression_opts.level: 32767
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:               Options.compression_opts.strategy: 0
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:                  Options.compression_opts.enabled: false
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:                   Options.target_file_size_base: 67108864
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:             Options.target_file_size_multiplier: 1
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:                        Options.arena_block_size: 1048576
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:                Options.disable_auto_compactions: 0
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:                   Options.inplace_update_support: 0
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:   Options.memtable_huge_page_size: 0
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:                           Options.bloom_locality: 0
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:                    Options.max_successive_merges: 0
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:                Options.paranoid_file_checks: 0
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:                Options.force_consistency_checks: 1
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:                Options.report_bg_io_stats: 0
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:                               Options.ttl: 2592000
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:                       Options.enable_blob_files: false
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:                           Options.min_blob_size: 0
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:                          Options.blob_file_size: 268435456
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:                Options.blob_file_starting_level: 0
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-1]:
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:           Options.merge_operator: None
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:        Options.compaction_filter: None
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:        Options.compaction_filter_factory: None
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:  Options.sst_partitioner_factory: None
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:            Options.table_factory: BlockBasedTable
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x563666671ba0)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x56366588d350
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:        Options.write_buffer_size: 16777216
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:  Options.max_write_buffer_number: 64
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:          Options.compression: LZ4
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:                  Options.bottommost_compression: Disabled
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:       Options.prefix_extractor: nullptr
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:             Options.num_levels: 7
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:            Options.compression_opts.window_bits: -14
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:                  Options.compression_opts.level: 32767
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:               Options.compression_opts.strategy: 0
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:                  Options.compression_opts.enabled: false
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:                   Options.target_file_size_base: 67108864
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:             Options.target_file_size_multiplier: 1
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:                        Options.arena_block_size: 1048576
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:                Options.disable_auto_compactions: 0
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:                   Options.inplace_update_support: 0
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:   Options.memtable_huge_page_size: 0
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:                           Options.bloom_locality: 0
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:                    Options.max_successive_merges: 0
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:                Options.paranoid_file_checks: 0
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:                Options.force_consistency_checks: 1
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:                Options.report_bg_io_stats: 0
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:                               Options.ttl: 2592000
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:                       Options.enable_blob_files: false
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:                           Options.min_blob_size: 0
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:                          Options.blob_file_size: 268435456
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:                Options.blob_file_starting_level: 0
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-2]:
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:           Options.merge_operator: None
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:        Options.compaction_filter: None
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:        Options.compaction_filter_factory: None
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:  Options.sst_partitioner_factory: None
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:            Options.table_factory: BlockBasedTable
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x563666671ba0)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x56366588d350
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:        Options.write_buffer_size: 16777216
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:  Options.max_write_buffer_number: 64
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:          Options.compression: LZ4
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:                  Options.bottommost_compression: Disabled
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:       Options.prefix_extractor: nullptr
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:             Options.num_levels: 7
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:            Options.compression_opts.window_bits: -14
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:                  Options.compression_opts.level: 32767
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:               Options.compression_opts.strategy: 0
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:                  Options.compression_opts.enabled: false
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:                   Options.target_file_size_base: 67108864
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:             Options.target_file_size_multiplier: 1
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:                        Options.arena_block_size: 1048576
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:                Options.disable_auto_compactions: 0
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:                   Options.inplace_update_support: 0
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:   Options.memtable_huge_page_size: 0
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:                           Options.bloom_locality: 0
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:                    Options.max_successive_merges: 0
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:                Options.paranoid_file_checks: 0
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:                Options.force_consistency_checks: 1
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:                Options.report_bg_io_stats: 0
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:                               Options.ttl: 2592000
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:                       Options.enable_blob_files: false
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:                           Options.min_blob_size: 0
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:                          Options.blob_file_size: 268435456
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:                Options.blob_file_starting_level: 0
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-0]:
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:           Options.merge_operator: None
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:        Options.compaction_filter: None
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:        Options.compaction_filter_factory: None
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:  Options.sst_partitioner_factory: None
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:            Options.table_factory: BlockBasedTable
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x563666671ba0)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x56366588d350
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:        Options.write_buffer_size: 16777216
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:  Options.max_write_buffer_number: 64
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:          Options.compression: LZ4
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:                  Options.bottommost_compression: Disabled
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:       Options.prefix_extractor: nullptr
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:             Options.num_levels: 7
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:            Options.compression_opts.window_bits: -14
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:                  Options.compression_opts.level: 32767
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:               Options.compression_opts.strategy: 0
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:                  Options.compression_opts.enabled: false
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:                   Options.target_file_size_base: 67108864
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:             Options.target_file_size_multiplier: 1
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:                        Options.arena_block_size: 1048576
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:                Options.disable_auto_compactions: 0
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:                   Options.inplace_update_support: 0
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:   Options.memtable_huge_page_size: 0
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:                           Options.bloom_locality: 0
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:                    Options.max_successive_merges: 0
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:                Options.paranoid_file_checks: 0
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:                Options.force_consistency_checks: 1
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:                Options.report_bg_io_stats: 0
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:                               Options.ttl: 2592000
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:                       Options.enable_blob_files: false
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:                           Options.min_blob_size: 0
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:                          Options.blob_file_size: 268435456
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:                Options.blob_file_starting_level: 0
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-1]:
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:           Options.merge_operator: None
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:        Options.compaction_filter: None
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:        Options.compaction_filter_factory: None
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:  Options.sst_partitioner_factory: None
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:            Options.table_factory: BlockBasedTable
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x563666671ba0)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x56366588d350
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:        Options.write_buffer_size: 16777216
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:  Options.max_write_buffer_number: 64
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:          Options.compression: LZ4
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:                  Options.bottommost_compression: Disabled
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:       Options.prefix_extractor: nullptr
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:             Options.num_levels: 7
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:            Options.compression_opts.window_bits: -14
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:                  Options.compression_opts.level: 32767
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:               Options.compression_opts.strategy: 0
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:                  Options.compression_opts.enabled: false
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:                   Options.target_file_size_base: 67108864
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:             Options.target_file_size_multiplier: 1
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:                        Options.arena_block_size: 1048576
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:                Options.disable_auto_compactions: 0
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:                   Options.inplace_update_support: 0
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:   Options.memtable_huge_page_size: 0
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:                           Options.bloom_locality: 0
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:                    Options.max_successive_merges: 0
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:                Options.paranoid_file_checks: 0
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:                Options.force_consistency_checks: 1
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:                Options.report_bg_io_stats: 0
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:                               Options.ttl: 2592000
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:                       Options.enable_blob_files: false
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:                           Options.min_blob_size: 0
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:                          Options.blob_file_size: 268435456
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:                Options.blob_file_starting_level: 0
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-2]:
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:           Options.merge_operator: None
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:        Options.compaction_filter: None
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:        Options.compaction_filter_factory: None
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:  Options.sst_partitioner_factory: None
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:            Options.table_factory: BlockBasedTable
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x563666671ba0)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x56366588d350
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:        Options.write_buffer_size: 16777216
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:  Options.max_write_buffer_number: 64
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:          Options.compression: LZ4
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:                  Options.bottommost_compression: Disabled
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:       Options.prefix_extractor: nullptr
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:             Options.num_levels: 7
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:            Options.compression_opts.window_bits: -14
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:                  Options.compression_opts.level: 32767
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:               Options.compression_opts.strategy: 0
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:                  Options.compression_opts.enabled: false
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:                   Options.target_file_size_base: 67108864
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:             Options.target_file_size_multiplier: 1
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:                        Options.arena_block_size: 1048576
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:                Options.disable_auto_compactions: 0
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:                   Options.inplace_update_support: 0
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:   Options.memtable_huge_page_size: 0
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:                           Options.bloom_locality: 0
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:                    Options.max_successive_merges: 0
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:                Options.paranoid_file_checks: 0
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:                Options.force_consistency_checks: 1
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:                Options.report_bg_io_stats: 0
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:                               Options.ttl: 2592000
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:                       Options.enable_blob_files: false
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:                           Options.min_blob_size: 0
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:                          Options.blob_file_size: 268435456
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:                Options.blob_file_starting_level: 0
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-0]:
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:           Options.merge_operator: None
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:        Options.compaction_filter: None
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:        Options.compaction_filter_factory: None
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:  Options.sst_partitioner_factory: None
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:            Options.table_factory: BlockBasedTable
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x563666671bc0)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x56366588c9b0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 536870912
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:        Options.write_buffer_size: 16777216
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:  Options.max_write_buffer_number: 64
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:          Options.compression: LZ4
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:                  Options.bottommost_compression: Disabled
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:       Options.prefix_extractor: nullptr
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:             Options.num_levels: 7
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:            Options.compression_opts.window_bits: -14
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:                  Options.compression_opts.level: 32767
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:               Options.compression_opts.strategy: 0
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:                  Options.compression_opts.enabled: false
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:                   Options.target_file_size_base: 67108864
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:             Options.target_file_size_multiplier: 1
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:                        Options.arena_block_size: 1048576
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:                Options.disable_auto_compactions: 0
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:                   Options.inplace_update_support: 0
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:   Options.memtable_huge_page_size: 0
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:                           Options.bloom_locality: 0
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:                    Options.max_successive_merges: 0
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:                Options.paranoid_file_checks: 0
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:                Options.force_consistency_checks: 1
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:                Options.report_bg_io_stats: 0
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:                               Options.ttl: 2592000
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:                       Options.enable_blob_files: false
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:                           Options.min_blob_size: 0
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:                          Options.blob_file_size: 268435456
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:                Options.blob_file_starting_level: 0
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-1]:
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:           Options.merge_operator: None
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:        Options.compaction_filter: None
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:        Options.compaction_filter_factory: None
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:  Options.sst_partitioner_factory: None
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:            Options.table_factory: BlockBasedTable
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x563666671bc0)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x56366588c9b0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 536870912
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:        Options.write_buffer_size: 16777216
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:  Options.max_write_buffer_number: 64
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:          Options.compression: LZ4
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:                  Options.bottommost_compression: Disabled
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:       Options.prefix_extractor: nullptr
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:             Options.num_levels: 7
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:            Options.compression_opts.window_bits: -14
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:                  Options.compression_opts.level: 32767
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:               Options.compression_opts.strategy: 0
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:                  Options.compression_opts.enabled: false
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:                   Options.target_file_size_base: 67108864
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:             Options.target_file_size_multiplier: 1
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:                        Options.arena_block_size: 1048576
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:                Options.disable_auto_compactions: 0
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:                   Options.inplace_update_support: 0
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:   Options.memtable_huge_page_size: 0
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:                           Options.bloom_locality: 0
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:                    Options.max_successive_merges: 0
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:                Options.paranoid_file_checks: 0
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:                Options.force_consistency_checks: 1
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:                Options.report_bg_io_stats: 0
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:                               Options.ttl: 2592000
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:                       Options.enable_blob_files: false
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:                           Options.min_blob_size: 0
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:                          Options.blob_file_size: 268435456
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:                Options.blob_file_starting_level: 0
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-2]:
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:           Options.merge_operator: None
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:        Options.compaction_filter: None
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:        Options.compaction_filter_factory: None
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:  Options.sst_partitioner_factory: None
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:            Options.table_factory: BlockBasedTable
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x563666671bc0)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x56366588c9b0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 536870912
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:        Options.write_buffer_size: 16777216
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:  Options.max_write_buffer_number: 64
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:          Options.compression: LZ4
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:                  Options.bottommost_compression: Disabled
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:       Options.prefix_extractor: nullptr
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:             Options.num_levels: 7
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:            Options.compression_opts.window_bits: -14
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:                  Options.compression_opts.level: 32767
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:               Options.compression_opts.strategy: 0
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:                  Options.compression_opts.enabled: false
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:                   Options.target_file_size_base: 67108864
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:             Options.target_file_size_multiplier: 1
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:                        Options.arena_block_size: 1048576
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:                Options.disable_auto_compactions: 0
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:                   Options.inplace_update_support: 0
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:   Options.memtable_huge_page_size: 0
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:                           Options.bloom_locality: 0
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:                    Options.max_successive_merges: 0
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:                Options.paranoid_file_checks: 0
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:                Options.force_consistency_checks: 1
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:                Options.report_bg_io_stats: 0
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:                               Options.ttl: 2592000
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:                       Options.enable_blob_files: false
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:                           Options.min_blob_size: 0
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:                          Options.blob_file_size: 268435456
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:                Options.blob_file_starting_level: 0
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb: [db/column_family.cc:635]         (skipping printing options)
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb: [db/column_family.cc:635]         (skipping printing options)
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb: [db/version_set.cc:5566] Recovered from manifest file:db/MANIFEST-000032 succeeded,manifest_file_number is 32, next_file_number is 34, last_sequence is 12, log_number is 5,prev_log_number is 0,max_column_family is 11,min_log_number_to_keep is 5
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb: [db/version_set.cc:5581] Column family [default] (ID 0), log number is 5
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb: [db/version_set.cc:5581] Column family [m-0] (ID 1), log number is 5
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb: [db/version_set.cc:5581] Column family [m-1] (ID 2), log number is 5
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb: [db/version_set.cc:5581] Column family [m-2] (ID 3), log number is 5
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb: [db/version_set.cc:5581] Column family [p-0] (ID 4), log number is 5
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb: [db/version_set.cc:5581] Column family [p-1] (ID 5), log number is 5
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb: [db/version_set.cc:5581] Column family [p-2] (ID 6), log number is 5
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb: [db/version_set.cc:5581] Column family [O-0] (ID 7), log number is 5
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb: [db/version_set.cc:5581] Column family [O-1] (ID 8), log number is 5
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb: [db/version_set.cc:5581] Column family [O-2] (ID 9), log number is 5
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb: [db/version_set.cc:5581] Column family [L] (ID 10), log number is 5
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb: [db/version_set.cc:5581] Column family [P] (ID 11), log number is 5
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb: [db/db_impl/db_impl_open.cc:539] DB ID: b9fddd49-3046-4aee-b8e5-a47f45e803d9
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765444458428801, "job": 1, "event": "recovery_started", "wal_files": [31]}
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb: [db/db_impl/db_impl_open.cc:1043] Recovering log #31 mode 2
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765444458429051, "job": 1, "event": "recovery_finished"}
Dec 11 09:14:18 compute-0 ceph-osd[82859]: bluestore(/var/lib/ceph/osd/ceph-1) _open_db opened rocksdb path db options compression=kLZ4Compression,max_write_buffer_number=64,min_write_buffer_number_to_merge=6,compaction_style=kCompactionStyleLevel,write_buffer_size=16777216,max_background_jobs=4,level0_file_num_compaction_trigger=8,max_bytes_for_level_base=1073741824,max_bytes_for_level_multiplier=8,compaction_readahead_size=2MB,max_total_wal_size=1073741824,writable_file_max_buffer_size=0
Dec 11 09:14:18 compute-0 ceph-osd[82859]: bluestore(/var/lib/ceph/osd/ceph-1) _open_super_meta old nid_max 1025
Dec 11 09:14:18 compute-0 ceph-osd[82859]: bluestore(/var/lib/ceph/osd/ceph-1) _open_super_meta old blobid_max 10240
Dec 11 09:14:18 compute-0 ceph-osd[82859]: bluestore(/var/lib/ceph/osd/ceph-1) _open_super_meta ondisk_format 4 compat_ondisk_format 3
Dec 11 09:14:18 compute-0 ceph-osd[82859]: bluestore(/var/lib/ceph/osd/ceph-1) _open_super_meta min_alloc_size 0x1000
Dec 11 09:14:18 compute-0 ceph-osd[82859]: freelist init
Dec 11 09:14:18 compute-0 ceph-osd[82859]: freelist _read_cfg
Dec 11 09:14:18 compute-0 ceph-osd[82859]: bluestore(/var/lib/ceph/osd/ceph-1) _init_alloc loaded 20 GiB in 2 extents, allocator type hybrid, capacity 0x4ffc00000, block size 0x1000, free 0x4ffbfd000, fragmentation 1.9e-07
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb: [db/db_impl/db_impl.cc:496] Shutdown: canceling all background work
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb: [db/db_impl/db_impl.cc:704] Shutdown complete
Dec 11 09:14:18 compute-0 ceph-osd[82859]: bluefs umount
Dec 11 09:14:18 compute-0 ceph-osd[82859]: bdev(0x563666800000 /var/lib/ceph/osd/ceph-1/block) close
Dec 11 09:14:18 compute-0 sudo[83357]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 11 09:14:18 compute-0 sudo[83357]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 11 09:14:18 compute-0 sudo[83357]: pam_unix(sudo:session): session closed for user root
Dec 11 09:14:18 compute-0 sudo[83382]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ls
Dec 11 09:14:18 compute-0 sudo[83382]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 11 09:14:18 compute-0 ceph-osd[82859]: bdev(0x563666800000 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Dec 11 09:14:18 compute-0 ceph-osd[82859]: bdev(0x563666800000 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Dec 11 09:14:18 compute-0 ceph-osd[82859]: bdev(0x563666800000 /var/lib/ceph/osd/ceph-1/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Dec 11 09:14:18 compute-0 ceph-osd[82859]: bdev(0x563666800000 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Dec 11 09:14:18 compute-0 ceph-osd[82859]: bluefs add_block_device bdev 1 path /var/lib/ceph/osd/ceph-1/block size 20 GiB
Dec 11 09:14:18 compute-0 ceph-osd[82859]: bluefs mount
Dec 11 09:14:18 compute-0 ceph-osd[82859]: bluefs _init_alloc shared, id 1, capacity 0x4ffc00000, block size 0x10000
Dec 11 09:14:18 compute-0 ceph-osd[82859]: bluefs mount final locked allocations 0 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Dec 11 09:14:18 compute-0 ceph-osd[82859]: bluefs mount final locked allocations 1 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Dec 11 09:14:18 compute-0 ceph-osd[82859]: bluefs mount final locked allocations 2 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Dec 11 09:14:18 compute-0 ceph-osd[82859]: bluefs mount final locked allocations 3 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Dec 11 09:14:18 compute-0 ceph-osd[82859]: bluefs mount final locked allocations 4 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Dec 11 09:14:18 compute-0 ceph-osd[82859]: bluefs mount shared_bdev_used = 4718592
Dec 11 09:14:18 compute-0 ceph-osd[82859]: bluestore(/var/lib/ceph/osd/ceph-1) _prepare_db_environment set db_paths to db,20397110067 db.slow,20397110067
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb: RocksDB version: 7.9.2
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb: Git sha 0
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb: Compile date 2025-07-17 03:12:14
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb: DB SUMMARY
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb: DB Session ID:  E1KKO7X1C2VB9OE8D4E5
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb: CURRENT file:  CURRENT
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb: IDENTITY file:  IDENTITY
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb: MANIFEST file:  MANIFEST-000032 size: 1007 Bytes
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb: SST files in db dir, Total Num: 1, files: 000030.sst 
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb: SST files in db.slow dir, Total Num: 0, files: 
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb: Write Ahead Log file in db.wal: 000031.log size: 5097 ; 
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:                         Options.error_if_exists: 0
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:                       Options.create_if_missing: 0
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:                         Options.paranoid_checks: 1
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:             Options.flush_verify_memtable_count: 1
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:                               Options.track_and_verify_wals_in_manifest: 0
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:        Options.verify_sst_unique_id_in_manifest: 1
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:                                     Options.env: 0x56366680e310
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:                                      Options.fs: LegacyFileSystem
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:                                Options.info_log: 0x563666671960
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:                Options.max_file_opening_threads: 16
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:                              Options.statistics: (nil)
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:                               Options.use_fsync: 0
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:                       Options.max_log_file_size: 0
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:                  Options.max_manifest_file_size: 1073741824
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:                   Options.log_file_time_to_roll: 0
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:                       Options.keep_log_file_num: 1000
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:                    Options.recycle_log_file_num: 0
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:                         Options.allow_fallocate: 1
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:                        Options.allow_mmap_reads: 0
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:                       Options.allow_mmap_writes: 0
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:                        Options.use_direct_reads: 0
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:                        Options.use_direct_io_for_flush_and_compaction: 0
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:          Options.create_missing_column_families: 0
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:                              Options.db_log_dir: 
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:                                 Options.wal_dir: db.wal
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:                Options.table_cache_numshardbits: 6
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:                         Options.WAL_ttl_seconds: 0
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:                       Options.WAL_size_limit_MB: 0
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:                        Options.max_write_batch_group_size_bytes: 1048576
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:             Options.manifest_preallocation_size: 4194304
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:                     Options.is_fd_close_on_exec: 1
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:                   Options.advise_random_on_open: 1
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:                    Options.db_write_buffer_size: 0
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:                    Options.write_buffer_manager: 0x563666764a00
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:         Options.access_hint_on_compaction_start: 1
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:           Options.random_access_max_buffer_size: 1048576
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:                      Options.use_adaptive_mutex: 0
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:                            Options.rate_limiter: (nil)
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:     Options.sst_file_manager.rate_bytes_per_sec: 0
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:                       Options.wal_recovery_mode: 2
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:                  Options.enable_thread_tracking: 0
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:                  Options.enable_pipelined_write: 0
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:                  Options.unordered_write: 0
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:         Options.allow_concurrent_memtable_write: 1
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:      Options.enable_write_thread_adaptive_yield: 1
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:             Options.write_thread_max_yield_usec: 100
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:            Options.write_thread_slow_yield_usec: 3
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:                               Options.row_cache: None
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:                              Options.wal_filter: None
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:             Options.avoid_flush_during_recovery: 0
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:             Options.allow_ingest_behind: 0
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:             Options.two_write_queues: 0
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:             Options.manual_wal_flush: 0
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:             Options.wal_compression: 0
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:             Options.atomic_flush: 0
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:             Options.avoid_unnecessary_blocking_io: 0
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:                 Options.persist_stats_to_disk: 0
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:                 Options.write_dbid_to_manifest: 0
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:                 Options.log_readahead_size: 0
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:                 Options.file_checksum_gen_factory: Unknown
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:                 Options.best_efforts_recovery: 0
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:                Options.max_bgerror_resume_count: 2147483647
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:            Options.bgerror_resume_retry_interval: 1000000
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:             Options.allow_data_in_errors: 0
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:             Options.db_host_id: __hostname__
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:             Options.enforce_single_del_contracts: true
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:             Options.max_background_jobs: 4
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:             Options.max_background_compactions: -1
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:             Options.max_subcompactions: 1
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:             Options.avoid_flush_during_shutdown: 0
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:           Options.writable_file_max_buffer_size: 0
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:             Options.delayed_write_rate : 16777216
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:             Options.max_total_wal_size: 1073741824
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:             Options.delete_obsolete_files_period_micros: 21600000000
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:                   Options.stats_dump_period_sec: 600
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:                 Options.stats_persist_period_sec: 600
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:                 Options.stats_history_buffer_size: 1048576
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:                          Options.max_open_files: -1
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:                          Options.bytes_per_sync: 0
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:                      Options.wal_bytes_per_sync: 0
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:                   Options.strict_bytes_per_sync: 0
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:       Options.compaction_readahead_size: 2097152
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:                  Options.max_background_flushes: -1
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb: Compression algorithms supported:
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:         kZSTD supported: 0
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:         kXpressCompression supported: 0
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:         kBZip2Compression supported: 0
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:         kZSTDNotFinalCompression supported: 0
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:         kLZ4Compression supported: 1
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:         kZlibCompression supported: 1
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:         kLZ4HCCompression supported: 1
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:         kSnappyCompression supported: 1
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb: Fast CRC32 supported: Supported on x86
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb: DMutex implementation: pthread_mutex_t
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb: [db/version_set.cc:5527] Recovering from manifest file: db/MANIFEST-000032
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [default]:
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:           Options.merge_operator: .T:int64_array.b:bitwise_xor
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:        Options.compaction_filter: None
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:        Options.compaction_filter_factory: None
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:  Options.sst_partitioner_factory: None
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:            Options.table_factory: BlockBasedTable
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5636666716c0)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x56366588d350
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:        Options.write_buffer_size: 16777216
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:  Options.max_write_buffer_number: 64
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:          Options.compression: LZ4
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:                  Options.bottommost_compression: Disabled
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:       Options.prefix_extractor: nullptr
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:             Options.num_levels: 7
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:            Options.compression_opts.window_bits: -14
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:                  Options.compression_opts.level: 32767
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:               Options.compression_opts.strategy: 0
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:                  Options.compression_opts.enabled: false
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:                   Options.target_file_size_base: 67108864
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:             Options.target_file_size_multiplier: 1
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:                        Options.arena_block_size: 1048576
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:                Options.disable_auto_compactions: 0
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:                   Options.inplace_update_support: 0
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:   Options.memtable_huge_page_size: 0
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:                           Options.bloom_locality: 0
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:                    Options.max_successive_merges: 0
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:                Options.paranoid_file_checks: 0
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:                Options.force_consistency_checks: 1
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:                Options.report_bg_io_stats: 0
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:                               Options.ttl: 2592000
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:                       Options.enable_blob_files: false
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:                           Options.min_blob_size: 0
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:                          Options.blob_file_size: 268435456
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:                Options.blob_file_starting_level: 0
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-0]:
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:           Options.merge_operator: None
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:        Options.compaction_filter: None
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:        Options.compaction_filter_factory: None
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:  Options.sst_partitioner_factory: None
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:            Options.table_factory: BlockBasedTable
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5636666716c0)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x56366588d350
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:        Options.write_buffer_size: 16777216
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:  Options.max_write_buffer_number: 64
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:          Options.compression: LZ4
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:                  Options.bottommost_compression: Disabled
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:       Options.prefix_extractor: nullptr
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:             Options.num_levels: 7
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:            Options.compression_opts.window_bits: -14
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:                  Options.compression_opts.level: 32767
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:               Options.compression_opts.strategy: 0
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:                  Options.compression_opts.enabled: false
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:                   Options.target_file_size_base: 67108864
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:             Options.target_file_size_multiplier: 1
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:                        Options.arena_block_size: 1048576
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:                Options.disable_auto_compactions: 0
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:                   Options.inplace_update_support: 0
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:   Options.memtable_huge_page_size: 0
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:                           Options.bloom_locality: 0
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:                    Options.max_successive_merges: 0
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:                Options.paranoid_file_checks: 0
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:                Options.force_consistency_checks: 1
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:                Options.report_bg_io_stats: 0
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:                               Options.ttl: 2592000
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:                       Options.enable_blob_files: false
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:                           Options.min_blob_size: 0
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:                          Options.blob_file_size: 268435456
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:                Options.blob_file_starting_level: 0
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-1]:
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:           Options.merge_operator: None
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:        Options.compaction_filter: None
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:        Options.compaction_filter_factory: None
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:  Options.sst_partitioner_factory: None
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:            Options.table_factory: BlockBasedTable
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5636666716c0)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x56366588d350
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:        Options.write_buffer_size: 16777216
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:  Options.max_write_buffer_number: 64
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:          Options.compression: LZ4
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:                  Options.bottommost_compression: Disabled
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:       Options.prefix_extractor: nullptr
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:             Options.num_levels: 7
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:            Options.compression_opts.window_bits: -14
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:                  Options.compression_opts.level: 32767
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:               Options.compression_opts.strategy: 0
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:                  Options.compression_opts.enabled: false
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:                   Options.target_file_size_base: 67108864
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:             Options.target_file_size_multiplier: 1
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:                        Options.arena_block_size: 1048576
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:                Options.disable_auto_compactions: 0
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:                   Options.inplace_update_support: 0
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:   Options.memtable_huge_page_size: 0
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:                           Options.bloom_locality: 0
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:                    Options.max_successive_merges: 0
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:                Options.paranoid_file_checks: 0
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:                Options.force_consistency_checks: 1
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:                Options.report_bg_io_stats: 0
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:                               Options.ttl: 2592000
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:                       Options.enable_blob_files: false
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:                           Options.min_blob_size: 0
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:                          Options.blob_file_size: 268435456
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:                Options.blob_file_starting_level: 0
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-2]:
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:           Options.merge_operator: None
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:        Options.compaction_filter: None
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:        Options.compaction_filter_factory: None
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:  Options.sst_partitioner_factory: None
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:            Options.table_factory: BlockBasedTable
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5636666716c0)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x56366588d350
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:        Options.write_buffer_size: 16777216
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:  Options.max_write_buffer_number: 64
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:          Options.compression: LZ4
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:                  Options.bottommost_compression: Disabled
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:       Options.prefix_extractor: nullptr
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:             Options.num_levels: 7
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:            Options.compression_opts.window_bits: -14
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:                  Options.compression_opts.level: 32767
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:               Options.compression_opts.strategy: 0
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:                  Options.compression_opts.enabled: false
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:                   Options.target_file_size_base: 67108864
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:             Options.target_file_size_multiplier: 1
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:                        Options.arena_block_size: 1048576
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:                Options.disable_auto_compactions: 0
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:                   Options.inplace_update_support: 0
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:   Options.memtable_huge_page_size: 0
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:                           Options.bloom_locality: 0
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:                    Options.max_successive_merges: 0
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:                Options.paranoid_file_checks: 0
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:                Options.force_consistency_checks: 1
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:                Options.report_bg_io_stats: 0
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:                               Options.ttl: 2592000
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:                       Options.enable_blob_files: false
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:                           Options.min_blob_size: 0
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:                          Options.blob_file_size: 268435456
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:                Options.blob_file_starting_level: 0
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-0]:
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:           Options.merge_operator: None
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:        Options.compaction_filter: None
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:        Options.compaction_filter_factory: None
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:  Options.sst_partitioner_factory: None
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:            Options.table_factory: BlockBasedTable
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5636666716c0)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x56366588d350
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:        Options.write_buffer_size: 16777216
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:  Options.max_write_buffer_number: 64
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:          Options.compression: LZ4
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:                  Options.bottommost_compression: Disabled
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:       Options.prefix_extractor: nullptr
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:             Options.num_levels: 7
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:            Options.compression_opts.window_bits: -14
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:                  Options.compression_opts.level: 32767
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:               Options.compression_opts.strategy: 0
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:                  Options.compression_opts.enabled: false
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:                   Options.target_file_size_base: 67108864
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:             Options.target_file_size_multiplier: 1
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:                        Options.arena_block_size: 1048576
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:                Options.disable_auto_compactions: 0
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:                   Options.inplace_update_support: 0
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:   Options.memtable_huge_page_size: 0
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:                           Options.bloom_locality: 0
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:                    Options.max_successive_merges: 0
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:                Options.paranoid_file_checks: 0
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:                Options.force_consistency_checks: 1
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:                Options.report_bg_io_stats: 0
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:                               Options.ttl: 2592000
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:                       Options.enable_blob_files: false
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:                           Options.min_blob_size: 0
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:                          Options.blob_file_size: 268435456
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:                Options.blob_file_starting_level: 0
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-1]:
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:           Options.merge_operator: None
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:        Options.compaction_filter: None
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:        Options.compaction_filter_factory: None
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:  Options.sst_partitioner_factory: None
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:            Options.table_factory: BlockBasedTable
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5636666716c0)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x56366588d350
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:        Options.write_buffer_size: 16777216
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:  Options.max_write_buffer_number: 64
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:          Options.compression: LZ4
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:                  Options.bottommost_compression: Disabled
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:       Options.prefix_extractor: nullptr
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:             Options.num_levels: 7
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:            Options.compression_opts.window_bits: -14
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:                  Options.compression_opts.level: 32767
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:               Options.compression_opts.strategy: 0
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:                  Options.compression_opts.enabled: false
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:                   Options.target_file_size_base: 67108864
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:             Options.target_file_size_multiplier: 1
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:                        Options.arena_block_size: 1048576
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:                Options.disable_auto_compactions: 0
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:                   Options.inplace_update_support: 0
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:   Options.memtable_huge_page_size: 0
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:                           Options.bloom_locality: 0
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:                    Options.max_successive_merges: 0
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:                Options.paranoid_file_checks: 0
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:                Options.force_consistency_checks: 1
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:                Options.report_bg_io_stats: 0
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:                               Options.ttl: 2592000
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:                       Options.enable_blob_files: false
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:                           Options.min_blob_size: 0
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:                          Options.blob_file_size: 268435456
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:                Options.blob_file_starting_level: 0
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-2]:
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:           Options.merge_operator: None
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:        Options.compaction_filter: None
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:        Options.compaction_filter_factory: None
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:  Options.sst_partitioner_factory: None
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:            Options.table_factory: BlockBasedTable
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5636666716c0)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x56366588d350
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:        Options.write_buffer_size: 16777216
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:  Options.max_write_buffer_number: 64
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:          Options.compression: LZ4
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:                  Options.bottommost_compression: Disabled
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:       Options.prefix_extractor: nullptr
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:             Options.num_levels: 7
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:            Options.compression_opts.window_bits: -14
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:                  Options.compression_opts.level: 32767
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:               Options.compression_opts.strategy: 0
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:                  Options.compression_opts.enabled: false
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:                   Options.target_file_size_base: 67108864
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:             Options.target_file_size_multiplier: 1
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:                        Options.arena_block_size: 1048576
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:                Options.disable_auto_compactions: 0
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:                   Options.inplace_update_support: 0
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:   Options.memtable_huge_page_size: 0
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:                           Options.bloom_locality: 0
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:                    Options.max_successive_merges: 0
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:                Options.paranoid_file_checks: 0
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:                Options.force_consistency_checks: 1
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:                Options.report_bg_io_stats: 0
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:                               Options.ttl: 2592000
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:                       Options.enable_blob_files: false
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:                           Options.min_blob_size: 0
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:                          Options.blob_file_size: 268435456
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:                Options.blob_file_starting_level: 0
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-0]:
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:           Options.merge_operator: None
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:        Options.compaction_filter: None
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:        Options.compaction_filter_factory: None
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:  Options.sst_partitioner_factory: None
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:            Options.table_factory: BlockBasedTable
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x563666671b00)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x56366588c9b0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 536870912
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:        Options.write_buffer_size: 16777216
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:  Options.max_write_buffer_number: 64
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:          Options.compression: LZ4
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:                  Options.bottommost_compression: Disabled
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:       Options.prefix_extractor: nullptr
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:             Options.num_levels: 7
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:            Options.compression_opts.window_bits: -14
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:                  Options.compression_opts.level: 32767
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:               Options.compression_opts.strategy: 0
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:                  Options.compression_opts.enabled: false
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:                   Options.target_file_size_base: 67108864
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:             Options.target_file_size_multiplier: 1
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:                        Options.arena_block_size: 1048576
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:                Options.disable_auto_compactions: 0
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:                   Options.inplace_update_support: 0
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:   Options.memtable_huge_page_size: 0
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:                           Options.bloom_locality: 0
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:                    Options.max_successive_merges: 0
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:                Options.paranoid_file_checks: 0
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:                Options.force_consistency_checks: 1
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:                Options.report_bg_io_stats: 0
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:                               Options.ttl: 2592000
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:                       Options.enable_blob_files: false
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:                           Options.min_blob_size: 0
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:                          Options.blob_file_size: 268435456
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:                Options.blob_file_starting_level: 0
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-1]:
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:           Options.merge_operator: None
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:        Options.compaction_filter: None
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:        Options.compaction_filter_factory: None
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:  Options.sst_partitioner_factory: None
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:            Options.table_factory: BlockBasedTable
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x563666671b00)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x56366588c9b0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 536870912
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:        Options.write_buffer_size: 16777216
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:  Options.max_write_buffer_number: 64
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:          Options.compression: LZ4
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:                  Options.bottommost_compression: Disabled
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:       Options.prefix_extractor: nullptr
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:             Options.num_levels: 7
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:            Options.compression_opts.window_bits: -14
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:                  Options.compression_opts.level: 32767
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:               Options.compression_opts.strategy: 0
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:                  Options.compression_opts.enabled: false
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:                   Options.target_file_size_base: 67108864
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:             Options.target_file_size_multiplier: 1
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:                        Options.arena_block_size: 1048576
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:                Options.disable_auto_compactions: 0
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:                   Options.inplace_update_support: 0
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:   Options.memtable_huge_page_size: 0
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:                           Options.bloom_locality: 0
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:                    Options.max_successive_merges: 0
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:                Options.paranoid_file_checks: 0
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:                Options.force_consistency_checks: 1
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:                Options.report_bg_io_stats: 0
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:                               Options.ttl: 2592000
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:                       Options.enable_blob_files: false
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:                           Options.min_blob_size: 0
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:                          Options.blob_file_size: 268435456
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:                Options.blob_file_starting_level: 0
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-2]:
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:           Options.merge_operator: None
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:        Options.compaction_filter: None
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:        Options.compaction_filter_factory: None
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:  Options.sst_partitioner_factory: None
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:            Options.table_factory: BlockBasedTable
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x563666671b00)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x56366588c9b0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 536870912
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:        Options.write_buffer_size: 16777216
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:  Options.max_write_buffer_number: 64
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:          Options.compression: LZ4
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:                  Options.bottommost_compression: Disabled
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:       Options.prefix_extractor: nullptr
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:             Options.num_levels: 7
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:            Options.compression_opts.window_bits: -14
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:                  Options.compression_opts.level: 32767
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:               Options.compression_opts.strategy: 0
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:                  Options.compression_opts.enabled: false
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:                   Options.target_file_size_base: 67108864
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:             Options.target_file_size_multiplier: 1
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:                        Options.arena_block_size: 1048576
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:                Options.disable_auto_compactions: 0
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:                   Options.inplace_update_support: 0
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:   Options.memtable_huge_page_size: 0
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:                           Options.bloom_locality: 0
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:                    Options.max_successive_merges: 0
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:                Options.paranoid_file_checks: 0
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:                Options.force_consistency_checks: 1
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:                Options.report_bg_io_stats: 0
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:                               Options.ttl: 2592000
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:                       Options.enable_blob_files: false
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:                           Options.min_blob_size: 0
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:                          Options.blob_file_size: 268435456
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb:                Options.blob_file_starting_level: 0
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb: [db/column_family.cc:635]         (skipping printing options)
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb: [db/column_family.cc:635]         (skipping printing options)
Dec 11 09:14:18 compute-0 ceph-mgr[74715]: log_channel(cluster) log [DBG] : pgmap v41: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb: [db/version_set.cc:5566] Recovered from manifest file:db/MANIFEST-000032 succeeded,manifest_file_number is 32, next_file_number is 34, last_sequence is 12, log_number is 5,prev_log_number is 0,max_column_family is 11,min_log_number_to_keep is 5
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb: [db/version_set.cc:5581] Column family [default] (ID 0), log number is 5
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb: [db/version_set.cc:5581] Column family [m-0] (ID 1), log number is 5
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb: [db/version_set.cc:5581] Column family [m-1] (ID 2), log number is 5
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb: [db/version_set.cc:5581] Column family [m-2] (ID 3), log number is 5
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb: [db/version_set.cc:5581] Column family [p-0] (ID 4), log number is 5
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb: [db/version_set.cc:5581] Column family [p-1] (ID 5), log number is 5
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb: [db/version_set.cc:5581] Column family [p-2] (ID 6), log number is 5
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb: [db/version_set.cc:5581] Column family [O-0] (ID 7), log number is 5
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb: [db/version_set.cc:5581] Column family [O-1] (ID 8), log number is 5
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb: [db/version_set.cc:5581] Column family [O-2] (ID 9), log number is 5
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb: [db/version_set.cc:5581] Column family [L] (ID 10), log number is 5
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb: [db/version_set.cc:5581] Column family [P] (ID 11), log number is 5
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb: [db/db_impl/db_impl_open.cc:539] DB ID: b9fddd49-3046-4aee-b8e5-a47f45e803d9
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765444458719429, "job": 1, "event": "recovery_started", "wal_files": [31]}
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb: [db/db_impl/db_impl_open.cc:1043] Recovering log #31 mode 2
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765444458838023, "cf_name": "default", "job": 1, "event": "table_file_creation", "file_number": 35, "file_size": 1272, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 13, "largest_seqno": 21, "table_properties": {"data_size": 128, "index_size": 27, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 87, "raw_average_key_size": 17, "raw_value_size": 82, "raw_average_value_size": 16, "num_data_blocks": 1, "num_entries": 5, "num_filter_entries": 5, "num_deletions": 0, "num_merge_operands": 2, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": ".T:int64_array.b:bitwise_xor", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "LZ4", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765444458, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "b9fddd49-3046-4aee-b8e5-a47f45e803d9", "db_session_id": "E1KKO7X1C2VB9OE8D4E5", "orig_file_number": 35, "seqno_to_time_mapping": "N/A"}}
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765444458841982, "cf_name": "p-0", "job": 1, "event": "table_file_creation", "file_number": 36, "file_size": 1595, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 14, "largest_seqno": 15, "table_properties": {"data_size": 469, "index_size": 39, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 72, "raw_average_key_size": 36, "raw_value_size": 571, "raw_average_value_size": 285, "num_data_blocks": 1, "num_entries": 2, "num_filter_entries": 2, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "p-0", "column_family_id": 4, "comparator": "leveldb.BytewiseComparator", "merge_operator": "nullptr", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "LZ4", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765444458, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "b9fddd49-3046-4aee-b8e5-a47f45e803d9", "db_session_id": "E1KKO7X1C2VB9OE8D4E5", "orig_file_number": 36, "seqno_to_time_mapping": "N/A"}}
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765444458846278, "cf_name": "O-2", "job": 1, "event": "table_file_creation", "file_number": 37, "file_size": 1275, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 16, "largest_seqno": 16, "table_properties": {"data_size": 121, "index_size": 64, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 55, "raw_average_key_size": 55, "raw_value_size": 50, "raw_average_value_size": 50, "num_data_blocks": 1, "num_entries": 1, "num_filter_entries": 1, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "O-2", "column_family_id": 9, "comparator": "leveldb.BytewiseComparator", "merge_operator": "nullptr", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "LZ4", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765444458, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "b9fddd49-3046-4aee-b8e5-a47f45e803d9", "db_session_id": "E1KKO7X1C2VB9OE8D4E5", "orig_file_number": 37, "seqno_to_time_mapping": "N/A"}}
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765444458848097, "job": 1, "event": "recovery_finished"}
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb: [db/version_set.cc:5047] Creating manifest 40
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb: [db/db_impl/db_impl_open.cc:1987] SstFileManager instance 0x563666872000
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb: DB pointer 0x56366681c000
Dec 11 09:14:18 compute-0 ceph-osd[82859]: bluestore(/var/lib/ceph/osd/ceph-1) _open_db opened rocksdb path db options compression=kLZ4Compression,max_write_buffer_number=64,min_write_buffer_number_to_merge=6,compaction_style=kCompactionStyleLevel,write_buffer_size=16777216,max_background_jobs=4,level0_file_num_compaction_trigger=8,max_bytes_for_level_base=1073741824,max_bytes_for_level_multiplier=8,compaction_readahead_size=2MB,max_total_wal_size=1073741824,writable_file_max_buffer_size=0
Dec 11 09:14:18 compute-0 ceph-osd[82859]: bluestore(/var/lib/ceph/osd/ceph-1) _upgrade_super from 4, latest 4
Dec 11 09:14:18 compute-0 ceph-osd[82859]: bluestore(/var/lib/ceph/osd/ceph-1) _upgrade_super done
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Dec 11 09:14:18 compute-0 ceph-osd[82859]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 0.2 total, 0.2 interval
                                           Cumulative writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 GB, 0.00 MB/s
                                           Cumulative WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 MB, 0.00 MB/s
                                           Interval WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.12              0.00         1    0.119       0      0       0.0       0.0
                                            Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.12              0.00         1    0.119       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.12              0.00         1    0.119       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.12              0.00         1    0.119       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.2 total, 0.2 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.01 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.1 seconds
                                           Interval compaction: 0.00 GB write, 0.01 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.1 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x56366588d350#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 3.1e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
                                           
                                           ** Compaction Stats [m-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.2 total, 0.2 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x56366588d350#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 3.1e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-0] **
                                           
                                           ** Compaction Stats [m-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.2 total, 0.2 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x56366588d350#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 3.1e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-1] **
                                           
                                           ** Compaction Stats [m-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.2 total, 0.2 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x56366588d350#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 3.1e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-2] **
                                           
                                           ** Compaction Stats [p-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.56 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.4      0.00              0.00         1    0.004       0      0       0.0       0.0
                                            Sum      1/0    1.56 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.4      0.00              0.00         1    0.004       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.4      0.00              0.00         1    0.004       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.4      0.00              0.00         1    0.004       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.2 total, 0.2 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.01 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.01 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x56366588d350#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 3.1e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-0] **
                                           
                                           ** Compaction Stats [p-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.2 total, 0.2 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x56366588d350#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 3.1e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-1] **
                                           
                                           ** Compaction Stats [p-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.2 total, 0.2 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x56366588d350#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 3.1e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-2] **
                                           
                                           ** Compaction Stats [O-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.2 total, 0.2 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x56366588c9b0#2 capacity: 512.00 MB usage: 0.25 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 2 last_secs: 5e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(1,0.11 KB,2.08616e-05%) IndexBlock(1,0.14 KB,2.68221e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-0] **
                                           
                                           ** Compaction Stats [O-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.2 total, 0.2 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x56366588c9b0#2 capacity: 512.00 MB usage: 0.25 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 2 last_secs: 5e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(1,0.11 KB,2.08616e-05%) IndexBlock(1,0.14 KB,2.68221e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-1] **
                                           
                                           ** Compaction Stats [O-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.25 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0
                                            Sum      1/0    1.25 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.2 total, 0.2 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.01 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.01 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x56366588c9b0#2 capacity: 512.00 MB usage: 0.25 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 2 last_secs: 5e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(1,0.11 KB,2.08616e-05%) IndexBlock(1,0.14 KB,2.68221e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-2] **
                                           
                                           ** Compaction Stats [L] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.002       0      0       0.0       0.0
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.002       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.002       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [L] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.002       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.2 total, 0.2 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x56366588d350#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 3.1e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [L] **
                                           
                                           ** Compaction Stats [P] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [P] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.2 total, 0.2 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x56366588d350#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 3.1e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [P] **
Dec 11 09:14:18 compute-0 ceph-osd[82859]: <cls> /home/jenkins-build/build/workspace/ceph-build/ARCH/x86_64/AVAILABLE_ARCH/x86_64/AVAILABLE_DIST/centos9/DIST/centos9/MACHINE_SIZE/gigantic/release/19.2.3/rpm/el9/BUILD/ceph-19.2.3/src/cls/cephfs/cls_cephfs.cc:201: loading cephfs
Dec 11 09:14:18 compute-0 ceph-osd[82859]: <cls> /home/jenkins-build/build/workspace/ceph-build/ARCH/x86_64/AVAILABLE_ARCH/x86_64/AVAILABLE_DIST/centos9/DIST/centos9/MACHINE_SIZE/gigantic/release/19.2.3/rpm/el9/BUILD/ceph-19.2.3/src/cls/hello/cls_hello.cc:316: loading cls_hello
Dec 11 09:14:18 compute-0 ceph-osd[82859]: _get_class not permitted to load lua
Dec 11 09:14:18 compute-0 ceph-osd[82859]: _get_class not permitted to load sdk
Dec 11 09:14:18 compute-0 ceph-osd[82859]: osd.1 0 crush map has features 288232575208783872, adjusting msgr requires for clients
Dec 11 09:14:18 compute-0 ceph-osd[82859]: osd.1 0 crush map has features 288232575208783872 was 8705, adjusting msgr requires for mons
Dec 11 09:14:18 compute-0 ceph-osd[82859]: osd.1 0 crush map has features 288232575208783872, adjusting msgr requires for osds
Dec 11 09:14:18 compute-0 ceph-osd[82859]: osd.1 0 check_osdmap_features enabling on-disk ERASURE CODES compat feature
Dec 11 09:14:18 compute-0 ceph-osd[82859]: osd.1 0 load_pgs
Dec 11 09:14:18 compute-0 ceph-osd[82859]: osd.1 0 load_pgs opened 0 pgs
Dec 11 09:14:18 compute-0 ceph-osd[82859]: osd.1 0 log_to_monitors true
Dec 11 09:14:18 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-osd-1[82855]: 2025-12-11T09:14:18.880+0000 7f5b37187740 -1 osd.1 0 log_to_monitors true
Dec 11 09:14:18 compute-0 ceph-mon[74426]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["1"]} v 0)
Dec 11 09:14:18 compute-0 ceph-mon[74426]: log_channel(audit) log [INF] : from='osd.1 [v2:192.168.122.100:6802/2940598697,v1:192.168.122.100:6803/2940598697]' entity='osd.1' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["1"]}]: dispatch
Dec 11 09:14:18 compute-0 ceph-mon[74426]: mon.compute-0@0(leader).osd e5 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 11 09:14:18 compute-0 ceph-mon[74426]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Dec 11 09:14:19 compute-0 ceph-mon[74426]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/1266408288' entity='mgr.compute-0.wwpcae' 
Dec 11 09:14:19 compute-0 ceph-mon[74426]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["0"]} v 0)
Dec 11 09:14:19 compute-0 ceph-mon[74426]: log_channel(audit) log [INF] : from='osd.0 [v2:192.168.122.101:6800/676261534,v1:192.168.122.101:6801/676261534]' entity='osd.0' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["0"]}]: dispatch
Dec 11 09:14:19 compute-0 ceph-mon[74426]: from='mgr.14124 192.168.122.100:0/1266408288' entity='mgr.compute-0.wwpcae' 
Dec 11 09:14:19 compute-0 ceph-mon[74426]: from='mgr.14124 192.168.122.100:0/1266408288' entity='mgr.compute-0.wwpcae' 
Dec 11 09:14:19 compute-0 ceph-mon[74426]: from='osd.1 [v2:192.168.122.100:6802/2940598697,v1:192.168.122.100:6803/2940598697]' entity='osd.1' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["1"]}]: dispatch
Dec 11 09:14:19 compute-0 ceph-mon[74426]: from='mgr.14124 192.168.122.100:0/1266408288' entity='mgr.compute-0.wwpcae' 
Dec 11 09:14:19 compute-0 ceph-mon[74426]: from='osd.0 [v2:192.168.122.101:6800/676261534,v1:192.168.122.101:6801/676261534]' entity='osd.0' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["0"]}]: dispatch
Dec 11 09:14:19 compute-0 ceph-mon[74426]: mon.compute-0@0(leader).osd e5 do_prune osdmap full prune enabled
Dec 11 09:14:19 compute-0 ceph-mon[74426]: mon.compute-0@0(leader).osd e5 encode_pending skipping prime_pg_temp; mapping job did not start
Dec 11 09:14:19 compute-0 podman[83694]: 2025-12-11 09:14:19.292503911 +0000 UTC m=+0.098876201 container exec 9c08b5b1828392d2cc014af7ec9c415c9c915d4b6dd9798da9403e6929851bd4 (image=quay.io/ceph/ceph:v19, name=ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-mon-compute-0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=squid, ceph=True, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, OSD_FLAVOR=default)
Dec 11 09:14:19 compute-0 ceph-mon[74426]: log_channel(audit) log [INF] : from='osd.1 [v2:192.168.122.100:6802/2940598697,v1:192.168.122.100:6803/2940598697]' entity='osd.1' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["1"]}]': finished
Dec 11 09:14:19 compute-0 ceph-mon[74426]: log_channel(audit) log [INF] : from='osd.0 [v2:192.168.122.101:6800/676261534,v1:192.168.122.101:6801/676261534]' entity='osd.0' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["0"]}]': finished
Dec 11 09:14:19 compute-0 ceph-mon[74426]: mon.compute-0@0(leader).osd e6 e6: 2 total, 0 up, 2 in
Dec 11 09:14:19 compute-0 ceph-mon[74426]: log_channel(cluster) log [DBG] : osdmap e6: 2 total, 0 up, 2 in
Dec 11 09:14:19 compute-0 podman[83694]: 2025-12-11 09:14:19.395326114 +0000 UTC m=+0.201698404 container exec_died 9c08b5b1828392d2cc014af7ec9c415c9c915d4b6dd9798da9403e6929851bd4 (image=quay.io/ceph/ceph:v19, name=ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-mon-compute-0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS)
Dec 11 09:14:19 compute-0 ceph-mon[74426]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush create-or-move", "id": 1, "weight":0.0195, "args": ["host=compute-0", "root=default"]} v 0)
Dec 11 09:14:19 compute-0 ceph-mon[74426]: log_channel(audit) log [INF] : from='osd.1 [v2:192.168.122.100:6802/2940598697,v1:192.168.122.100:6803/2940598697]' entity='osd.1' cmd=[{"prefix": "osd crush create-or-move", "id": 1, "weight":0.0195, "args": ["host=compute-0", "root=default"]}]: dispatch
Dec 11 09:14:19 compute-0 ceph-mon[74426]: mon.compute-0@0(leader).osd e6 create-or-move crush item name 'osd.1' initial_weight 0.0195 at location {host=compute-0,root=default}
Dec 11 09:14:19 compute-0 ceph-mon[74426]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush create-or-move", "id": 0, "weight":0.0195, "args": ["host=compute-1", "root=default"]} v 0)
Dec 11 09:14:19 compute-0 ceph-mon[74426]: log_channel(audit) log [INF] : from='osd.0 [v2:192.168.122.101:6800/676261534,v1:192.168.122.101:6801/676261534]' entity='osd.0' cmd=[{"prefix": "osd crush create-or-move", "id": 0, "weight":0.0195, "args": ["host=compute-1", "root=default"]}]: dispatch
Dec 11 09:14:19 compute-0 ceph-mon[74426]: mon.compute-0@0(leader).osd e6 create-or-move crush item name 'osd.0' initial_weight 0.0195 at location {host=compute-1,root=default}
Dec 11 09:14:19 compute-0 ceph-mon[74426]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0)
Dec 11 09:14:19 compute-0 ceph-mon[74426]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/1266408288' entity='mgr.compute-0.wwpcae' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Dec 11 09:14:19 compute-0 ceph-mon[74426]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0)
Dec 11 09:14:19 compute-0 ceph-mon[74426]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/1266408288' entity='mgr.compute-0.wwpcae' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Dec 11 09:14:19 compute-0 ceph-mgr[74715]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Dec 11 09:14:19 compute-0 ceph-mgr[74715]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Dec 11 09:14:19 compute-0 ceph-mon[74426]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Dec 11 09:14:19 compute-0 ceph-mon[74426]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/1266408288' entity='mgr.compute-0.wwpcae' 
Dec 11 09:14:19 compute-0 sudo[83382]: pam_unix(sudo:session): session closed for user root
Dec 11 09:14:19 compute-0 ceph-mon[74426]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec 11 09:14:19 compute-0 ceph-mon[74426]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/1266408288' entity='mgr.compute-0.wwpcae' 
Dec 11 09:14:19 compute-0 ceph-mon[74426]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec 11 09:14:19 compute-0 ceph-mon[74426]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/1266408288' entity='mgr.compute-0.wwpcae' 
Dec 11 09:14:19 compute-0 sudo[83780]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 11 09:14:19 compute-0 sudo[83780]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 11 09:14:19 compute-0 sudo[83780]: pam_unix(sudo:session): session closed for user root
Dec 11 09:14:19 compute-0 sudo[83805]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --timeout 895 gather-facts
Dec 11 09:14:19 compute-0 sudo[83805]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 11 09:14:19 compute-0 ceph-osd[82859]: log_channel(cluster) log [DBG] : purged_snaps scrub starts
Dec 11 09:14:19 compute-0 ceph-osd[82859]: log_channel(cluster) log [DBG] : purged_snaps scrub ok
Dec 11 09:14:19 compute-0 ceph-mon[74426]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Dec 11 09:14:19 compute-0 ceph-mon[74426]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/1266408288' entity='mgr.compute-0.wwpcae' 
Dec 11 09:14:19 compute-0 ceph-mon[74426]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Dec 11 09:14:19 compute-0 ceph-mon[74426]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/1266408288' entity='mgr.compute-0.wwpcae' 
Dec 11 09:14:20 compute-0 ceph-mon[74426]: mon.compute-0@0(leader).osd e6 do_prune osdmap full prune enabled
Dec 11 09:14:20 compute-0 ceph-mon[74426]: mon.compute-0@0(leader).osd e6 encode_pending skipping prime_pg_temp; mapping job did not start
Dec 11 09:14:20 compute-0 sudo[83805]: pam_unix(sudo:session): session closed for user root
Dec 11 09:14:20 compute-0 sudo[83861]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 11 09:14:20 compute-0 sudo[83861]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 11 09:14:20 compute-0 sudo[83861]: pam_unix(sudo:session): session closed for user root
Dec 11 09:14:20 compute-0 ceph-mon[74426]: log_channel(audit) log [INF] : from='osd.1 [v2:192.168.122.100:6802/2940598697,v1:192.168.122.100:6803/2940598697]' entity='osd.1' cmd='[{"prefix": "osd crush create-or-move", "id": 1, "weight":0.0195, "args": ["host=compute-0", "root=default"]}]': finished
Dec 11 09:14:20 compute-0 ceph-mon[74426]: log_channel(audit) log [INF] : from='osd.0 [v2:192.168.122.101:6800/676261534,v1:192.168.122.101:6801/676261534]' entity='osd.0' cmd='[{"prefix": "osd crush create-or-move", "id": 0, "weight":0.0195, "args": ["host=compute-1", "root=default"]}]': finished
Dec 11 09:14:20 compute-0 ceph-osd[82859]: osd.1 0 done with init, starting boot process
Dec 11 09:14:20 compute-0 ceph-osd[82859]: osd.1 0 start_boot
Dec 11 09:14:20 compute-0 ceph-osd[82859]: osd.1 0 maybe_override_options_for_qos osd_max_backfills set to 1
Dec 11 09:14:20 compute-0 ceph-osd[82859]: osd.1 0 maybe_override_options_for_qos osd_recovery_max_active set to 0
Dec 11 09:14:20 compute-0 ceph-osd[82859]: osd.1 0 maybe_override_options_for_qos osd_recovery_max_active_hdd set to 3
Dec 11 09:14:20 compute-0 ceph-osd[82859]: osd.1 0 maybe_override_options_for_qos osd_recovery_max_active_ssd set to 10
Dec 11 09:14:20 compute-0 ceph-osd[82859]: osd.1 0  bench count 12288000 bsize 4 KiB
Dec 11 09:14:20 compute-0 ceph-mon[74426]: mon.compute-0@0(leader).osd e7 e7: 2 total, 0 up, 2 in
Dec 11 09:14:20 compute-0 ceph-mon[74426]: log_channel(cluster) log [DBG] : osdmap e7: 2 total, 0 up, 2 in
Dec 11 09:14:20 compute-0 sudo[83886]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060 -- inventory --format=json-pretty --filter-for-batch
Dec 11 09:14:20 compute-0 sudo[83886]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 11 09:14:20 compute-0 ceph-mon[74426]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0)
Dec 11 09:14:20 compute-0 ceph-mon[74426]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/1266408288' entity='mgr.compute-0.wwpcae' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Dec 11 09:14:20 compute-0 ceph-mon[74426]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0)
Dec 11 09:14:20 compute-0 ceph-mon[74426]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/1266408288' entity='mgr.compute-0.wwpcae' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Dec 11 09:14:20 compute-0 ceph-mgr[74715]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Dec 11 09:14:20 compute-0 ceph-mgr[74715]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Dec 11 09:14:20 compute-0 ceph-mgr[74715]: mgr.server handle_open ignoring open from osd.1 v2:192.168.122.100:6802/2940598697; not ready for session (expect reconnect)
Dec 11 09:14:20 compute-0 ceph-mon[74426]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0)
Dec 11 09:14:20 compute-0 ceph-mon[74426]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/1266408288' entity='mgr.compute-0.wwpcae' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Dec 11 09:14:20 compute-0 ceph-mgr[74715]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Dec 11 09:14:20 compute-0 ceph-mgr[74715]: mgr.server handle_open ignoring open from osd.0 v2:192.168.122.101:6800/676261534; not ready for session (expect reconnect)
Dec 11 09:14:20 compute-0 ceph-mon[74426]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0)
Dec 11 09:14:20 compute-0 ceph-mon[74426]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/1266408288' entity='mgr.compute-0.wwpcae' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Dec 11 09:14:20 compute-0 ceph-mgr[74715]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Dec 11 09:14:20 compute-0 ceph-mon[74426]: pgmap v41: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec 11 09:14:20 compute-0 ceph-mon[74426]: from='osd.1 [v2:192.168.122.100:6802/2940598697,v1:192.168.122.100:6803/2940598697]' entity='osd.1' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["1"]}]': finished
Dec 11 09:14:20 compute-0 ceph-mon[74426]: from='osd.0 [v2:192.168.122.101:6800/676261534,v1:192.168.122.101:6801/676261534]' entity='osd.0' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["0"]}]': finished
Dec 11 09:14:20 compute-0 ceph-mon[74426]: osdmap e6: 2 total, 0 up, 2 in
Dec 11 09:14:20 compute-0 ceph-mon[74426]: from='osd.1 [v2:192.168.122.100:6802/2940598697,v1:192.168.122.100:6803/2940598697]' entity='osd.1' cmd=[{"prefix": "osd crush create-or-move", "id": 1, "weight":0.0195, "args": ["host=compute-0", "root=default"]}]: dispatch
Dec 11 09:14:20 compute-0 ceph-mon[74426]: from='osd.0 [v2:192.168.122.101:6800/676261534,v1:192.168.122.101:6801/676261534]' entity='osd.0' cmd=[{"prefix": "osd crush create-or-move", "id": 0, "weight":0.0195, "args": ["host=compute-1", "root=default"]}]: dispatch
Dec 11 09:14:20 compute-0 ceph-mon[74426]: from='mgr.14124 192.168.122.100:0/1266408288' entity='mgr.compute-0.wwpcae' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Dec 11 09:14:20 compute-0 ceph-mon[74426]: from='mgr.14124 192.168.122.100:0/1266408288' entity='mgr.compute-0.wwpcae' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Dec 11 09:14:20 compute-0 ceph-mon[74426]: from='mgr.14124 192.168.122.100:0/1266408288' entity='mgr.compute-0.wwpcae' 
Dec 11 09:14:20 compute-0 ceph-mon[74426]: from='mgr.14124 192.168.122.100:0/1266408288' entity='mgr.compute-0.wwpcae' 
Dec 11 09:14:20 compute-0 ceph-mon[74426]: from='mgr.14124 192.168.122.100:0/1266408288' entity='mgr.compute-0.wwpcae' 
Dec 11 09:14:20 compute-0 ceph-mon[74426]: from='mgr.14124 192.168.122.100:0/1266408288' entity='mgr.compute-0.wwpcae' 
Dec 11 09:14:20 compute-0 ceph-mon[74426]: from='mgr.14124 192.168.122.100:0/1266408288' entity='mgr.compute-0.wwpcae' 
Dec 11 09:14:20 compute-0 ceph-mgr[74715]: log_channel(cluster) log [DBG] : pgmap v44: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec 11 09:14:20 compute-0 ceph-mon[74426]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Dec 11 09:14:20 compute-0 ceph-mon[74426]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/1266408288' entity='mgr.compute-0.wwpcae' 
Dec 11 09:14:20 compute-0 podman[83947]: 2025-12-11 09:14:20.946411293 +0000 UTC m=+0.050544190 container create b8410f08b7f8bf370900fdb9681032c09a029283e7b43ae9879b0e2ad741c1d8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hungry_visvesvaraya, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Dec 11 09:14:21 compute-0 systemd[1]: Started libpod-conmon-b8410f08b7f8bf370900fdb9681032c09a029283e7b43ae9879b0e2ad741c1d8.scope.
Dec 11 09:14:21 compute-0 podman[83947]: 2025-12-11 09:14:20.927290052 +0000 UTC m=+0.031422979 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec 11 09:14:21 compute-0 systemd[1]: Started libcrun container.
Dec 11 09:14:21 compute-0 podman[83947]: 2025-12-11 09:14:21.066592343 +0000 UTC m=+0.170725250 container init b8410f08b7f8bf370900fdb9681032c09a029283e7b43ae9879b0e2ad741c1d8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hungry_visvesvaraya, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Dec 11 09:14:21 compute-0 podman[83947]: 2025-12-11 09:14:21.126405184 +0000 UTC m=+0.230538081 container start b8410f08b7f8bf370900fdb9681032c09a029283e7b43ae9879b0e2ad741c1d8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hungry_visvesvaraya, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Dec 11 09:14:21 compute-0 hungry_visvesvaraya[83963]: 167 167
Dec 11 09:14:21 compute-0 systemd[1]: libpod-b8410f08b7f8bf370900fdb9681032c09a029283e7b43ae9879b0e2ad741c1d8.scope: Deactivated successfully.
Dec 11 09:14:21 compute-0 conmon[83963]: conmon b8410f08b7f8bf370900 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-b8410f08b7f8bf370900fdb9681032c09a029283e7b43ae9879b0e2ad741c1d8.scope/container/memory.events
Dec 11 09:14:21 compute-0 ceph-mgr[74715]: [volumes INFO mgr_util] scanning for idle connections..
Dec 11 09:14:21 compute-0 ceph-mgr[74715]: [volumes INFO mgr_util] cleaning up connections: []
Dec 11 09:14:21 compute-0 ceph-mgr[74715]: [volumes INFO mgr_util] scanning for idle connections..
Dec 11 09:14:21 compute-0 ceph-mgr[74715]: [volumes INFO mgr_util] cleaning up connections: []
Dec 11 09:14:21 compute-0 podman[83947]: 2025-12-11 09:14:21.144241015 +0000 UTC m=+0.248373912 container attach b8410f08b7f8bf370900fdb9681032c09a029283e7b43ae9879b0e2ad741c1d8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hungry_visvesvaraya, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Dec 11 09:14:21 compute-0 ceph-mgr[74715]: [volumes INFO mgr_util] scanning for idle connections..
Dec 11 09:14:21 compute-0 ceph-mgr[74715]: [volumes INFO mgr_util] cleaning up connections: []
Dec 11 09:14:21 compute-0 podman[83947]: 2025-12-11 09:14:21.144900205 +0000 UTC m=+0.249033112 container died b8410f08b7f8bf370900fdb9681032c09a029283e7b43ae9879b0e2ad741c1d8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hungry_visvesvaraya, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 11 09:14:21 compute-0 systemd[1]: var-lib-containers-storage-overlay-ceb1715a3573b80a7a52c9c242fa4810f389e113242136846221980835d11db9-merged.mount: Deactivated successfully.
Dec 11 09:14:21 compute-0 podman[83947]: 2025-12-11 09:14:21.276849675 +0000 UTC m=+0.380982572 container remove b8410f08b7f8bf370900fdb9681032c09a029283e7b43ae9879b0e2ad741c1d8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hungry_visvesvaraya, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Dec 11 09:14:21 compute-0 systemd[1]: libpod-conmon-b8410f08b7f8bf370900fdb9681032c09a029283e7b43ae9879b0e2ad741c1d8.scope: Deactivated successfully.
Dec 11 09:14:21 compute-0 podman[83989]: 2025-12-11 09:14:21.446344006 +0000 UTC m=+0.048454865 container create a633317d8d1b96348c11107e30f14dda4045ad1d64f87110445369db872f9b25 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=lucid_booth, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 11 09:14:21 compute-0 systemd[1]: Started libpod-conmon-a633317d8d1b96348c11107e30f14dda4045ad1d64f87110445369db872f9b25.scope.
Dec 11 09:14:21 compute-0 podman[83989]: 2025-12-11 09:14:21.42423179 +0000 UTC m=+0.026342669 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec 11 09:14:21 compute-0 systemd[1]: Started libcrun container.
Dec 11 09:14:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8dd7a95d5c3d6436163f81ab84b013097f5467573cc6d7ad1c050dc55ab7c032/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 11 09:14:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8dd7a95d5c3d6436163f81ab84b013097f5467573cc6d7ad1c050dc55ab7c032/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 11 09:14:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8dd7a95d5c3d6436163f81ab84b013097f5467573cc6d7ad1c050dc55ab7c032/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 11 09:14:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8dd7a95d5c3d6436163f81ab84b013097f5467573cc6d7ad1c050dc55ab7c032/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 11 09:14:21 compute-0 ceph-mgr[74715]: mgr.server handle_open ignoring open from osd.1 v2:192.168.122.100:6802/2940598697; not ready for session (expect reconnect)
Dec 11 09:14:21 compute-0 ceph-mon[74426]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0)
Dec 11 09:14:21 compute-0 ceph-mon[74426]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/1266408288' entity='mgr.compute-0.wwpcae' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Dec 11 09:14:21 compute-0 ceph-mgr[74715]: mgr.server handle_open ignoring open from osd.0 v2:192.168.122.101:6800/676261534; not ready for session (expect reconnect)
Dec 11 09:14:21 compute-0 ceph-mon[74426]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0)
Dec 11 09:14:21 compute-0 ceph-mon[74426]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/1266408288' entity='mgr.compute-0.wwpcae' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Dec 11 09:14:21 compute-0 ceph-mgr[74715]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Dec 11 09:14:21 compute-0 ceph-mgr[74715]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Dec 11 09:14:21 compute-0 podman[83989]: 2025-12-11 09:14:21.584688356 +0000 UTC m=+0.186799245 container init a633317d8d1b96348c11107e30f14dda4045ad1d64f87110445369db872f9b25 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=lucid_booth, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS)
Dec 11 09:14:21 compute-0 podman[83989]: 2025-12-11 09:14:21.594085822 +0000 UTC m=+0.196196691 container start a633317d8d1b96348c11107e30f14dda4045ad1d64f87110445369db872f9b25 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=lucid_booth, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec 11 09:14:21 compute-0 podman[83989]: 2025-12-11 09:14:21.625395837 +0000 UTC m=+0.227506706 container attach a633317d8d1b96348c11107e30f14dda4045ad1d64f87110445369db872f9b25 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=lucid_booth, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, ceph=True)
Dec 11 09:14:21 compute-0 ceph-mon[74426]: from='osd.1 [v2:192.168.122.100:6802/2940598697,v1:192.168.122.100:6803/2940598697]' entity='osd.1' cmd='[{"prefix": "osd crush create-or-move", "id": 1, "weight":0.0195, "args": ["host=compute-0", "root=default"]}]': finished
Dec 11 09:14:21 compute-0 ceph-mon[74426]: from='osd.0 [v2:192.168.122.101:6800/676261534,v1:192.168.122.101:6801/676261534]' entity='osd.0' cmd='[{"prefix": "osd crush create-or-move", "id": 0, "weight":0.0195, "args": ["host=compute-1", "root=default"]}]': finished
Dec 11 09:14:21 compute-0 ceph-mon[74426]: osdmap e7: 2 total, 0 up, 2 in
Dec 11 09:14:21 compute-0 ceph-mon[74426]: from='mgr.14124 192.168.122.100:0/1266408288' entity='mgr.compute-0.wwpcae' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Dec 11 09:14:21 compute-0 ceph-mon[74426]: from='mgr.14124 192.168.122.100:0/1266408288' entity='mgr.compute-0.wwpcae' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Dec 11 09:14:21 compute-0 ceph-mon[74426]: from='mgr.14124 192.168.122.100:0/1266408288' entity='mgr.compute-0.wwpcae' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Dec 11 09:14:21 compute-0 ceph-mon[74426]: from='mgr.14124 192.168.122.100:0/1266408288' entity='mgr.compute-0.wwpcae' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Dec 11 09:14:21 compute-0 ceph-mon[74426]: from='mgr.14124 192.168.122.100:0/1266408288' entity='mgr.compute-0.wwpcae' 
Dec 11 09:14:21 compute-0 ceph-mon[74426]: from='mgr.14124 192.168.122.100:0/1266408288' entity='mgr.compute-0.wwpcae' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Dec 11 09:14:21 compute-0 ceph-mon[74426]: from='mgr.14124 192.168.122.100:0/1266408288' entity='mgr.compute-0.wwpcae' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Dec 11 09:14:22 compute-0 lucid_booth[84005]: [
Dec 11 09:14:22 compute-0 lucid_booth[84005]:     {
Dec 11 09:14:22 compute-0 lucid_booth[84005]:         "available": false,
Dec 11 09:14:22 compute-0 lucid_booth[84005]:         "being_replaced": false,
Dec 11 09:14:22 compute-0 lucid_booth[84005]:         "ceph_device_lvm": false,
Dec 11 09:14:22 compute-0 lucid_booth[84005]:         "device_id": "QEMU_DVD-ROM_QM00001",
Dec 11 09:14:22 compute-0 lucid_booth[84005]:         "lsm_data": {},
Dec 11 09:14:22 compute-0 lucid_booth[84005]:         "lvs": [],
Dec 11 09:14:22 compute-0 lucid_booth[84005]:         "path": "/dev/sr0",
Dec 11 09:14:22 compute-0 lucid_booth[84005]:         "rejected_reasons": [
Dec 11 09:14:22 compute-0 lucid_booth[84005]:             "Has a FileSystem",
Dec 11 09:14:22 compute-0 lucid_booth[84005]:             "Insufficient space (<5GB)"
Dec 11 09:14:22 compute-0 lucid_booth[84005]:         ],
Dec 11 09:14:22 compute-0 lucid_booth[84005]:         "sys_api": {
Dec 11 09:14:22 compute-0 lucid_booth[84005]:             "actuators": null,
Dec 11 09:14:22 compute-0 lucid_booth[84005]:             "device_nodes": [
Dec 11 09:14:22 compute-0 lucid_booth[84005]:                 "sr0"
Dec 11 09:14:22 compute-0 lucid_booth[84005]:             ],
Dec 11 09:14:22 compute-0 lucid_booth[84005]:             "devname": "sr0",
Dec 11 09:14:22 compute-0 lucid_booth[84005]:             "human_readable_size": "482.00 KB",
Dec 11 09:14:22 compute-0 lucid_booth[84005]:             "id_bus": "ata",
Dec 11 09:14:22 compute-0 lucid_booth[84005]:             "model": "QEMU DVD-ROM",
Dec 11 09:14:22 compute-0 lucid_booth[84005]:             "nr_requests": "2",
Dec 11 09:14:22 compute-0 lucid_booth[84005]:             "parent": "/dev/sr0",
Dec 11 09:14:22 compute-0 lucid_booth[84005]:             "partitions": {},
Dec 11 09:14:22 compute-0 lucid_booth[84005]:             "path": "/dev/sr0",
Dec 11 09:14:22 compute-0 lucid_booth[84005]:             "removable": "1",
Dec 11 09:14:22 compute-0 lucid_booth[84005]:             "rev": "2.5+",
Dec 11 09:14:22 compute-0 lucid_booth[84005]:             "ro": "0",
Dec 11 09:14:22 compute-0 lucid_booth[84005]:             "rotational": "1",
Dec 11 09:14:22 compute-0 lucid_booth[84005]:             "sas_address": "",
Dec 11 09:14:22 compute-0 lucid_booth[84005]:             "sas_device_handle": "",
Dec 11 09:14:22 compute-0 lucid_booth[84005]:             "scheduler_mode": "mq-deadline",
Dec 11 09:14:22 compute-0 lucid_booth[84005]:             "sectors": 0,
Dec 11 09:14:22 compute-0 lucid_booth[84005]:             "sectorsize": "2048",
Dec 11 09:14:22 compute-0 lucid_booth[84005]:             "size": 493568.0,
Dec 11 09:14:22 compute-0 lucid_booth[84005]:             "support_discard": "2048",
Dec 11 09:14:22 compute-0 lucid_booth[84005]:             "type": "disk",
Dec 11 09:14:22 compute-0 lucid_booth[84005]:             "vendor": "QEMU"
Dec 11 09:14:22 compute-0 lucid_booth[84005]:         }
Dec 11 09:14:22 compute-0 lucid_booth[84005]:     }
Dec 11 09:14:22 compute-0 lucid_booth[84005]: ]
Dec 11 09:14:22 compute-0 systemd[1]: libpod-a633317d8d1b96348c11107e30f14dda4045ad1d64f87110445369db872f9b25.scope: Deactivated successfully.
Dec 11 09:14:22 compute-0 podman[83989]: 2025-12-11 09:14:22.51428363 +0000 UTC m=+1.116394519 container died a633317d8d1b96348c11107e30f14dda4045ad1d64f87110445369db872f9b25 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=lucid_booth, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, ceph=True)
Dec 11 09:14:22 compute-0 ceph-mgr[74715]: mgr.server handle_open ignoring open from osd.1 v2:192.168.122.100:6802/2940598697; not ready for session (expect reconnect)
Dec 11 09:14:22 compute-0 ceph-mon[74426]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Dec 11 09:14:22 compute-0 ceph-mon[74426]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0)
Dec 11 09:14:22 compute-0 ceph-mon[74426]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/1266408288' entity='mgr.compute-0.wwpcae' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Dec 11 09:14:22 compute-0 ceph-mgr[74715]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Dec 11 09:14:22 compute-0 ceph-mgr[74715]: mgr.server handle_open ignoring open from osd.0 v2:192.168.122.101:6800/676261534; not ready for session (expect reconnect)
Dec 11 09:14:22 compute-0 ceph-mon[74426]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0)
Dec 11 09:14:22 compute-0 ceph-mon[74426]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/1266408288' entity='mgr.compute-0.wwpcae' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Dec 11 09:14:22 compute-0 ceph-mgr[74715]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Dec 11 09:14:22 compute-0 ceph-mon[74426]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/1266408288' entity='mgr.compute-0.wwpcae' 
Dec 11 09:14:22 compute-0 ceph-mon[74426]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Dec 11 09:14:22 compute-0 ceph-mon[74426]: purged_snaps scrub starts
Dec 11 09:14:22 compute-0 ceph-mon[74426]: purged_snaps scrub ok
Dec 11 09:14:22 compute-0 ceph-mon[74426]: purged_snaps scrub starts
Dec 11 09:14:22 compute-0 ceph-mon[74426]: purged_snaps scrub ok
Dec 11 09:14:22 compute-0 ceph-mon[74426]: pgmap v44: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec 11 09:14:22 compute-0 ceph-mon[74426]: from='mgr.14124 192.168.122.100:0/1266408288' entity='mgr.compute-0.wwpcae' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Dec 11 09:14:22 compute-0 ceph-mon[74426]: from='mgr.14124 192.168.122.100:0/1266408288' entity='mgr.compute-0.wwpcae' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Dec 11 09:14:22 compute-0 ceph-mon[74426]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/1266408288' entity='mgr.compute-0.wwpcae' 
Dec 11 09:14:22 compute-0 ceph-mon[74426]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Dec 11 09:14:22 compute-0 ceph-mon[74426]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/1266408288' entity='mgr.compute-0.wwpcae' 
Dec 11 09:14:22 compute-0 ceph-mon[74426]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Dec 11 09:14:22 compute-0 ceph-mon[74426]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/1266408288' entity='mgr.compute-0.wwpcae' 
Dec 11 09:14:22 compute-0 ceph-mon[74426]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config rm", "who": "osd.0", "name": "osd_memory_target"} v 0)
Dec 11 09:14:22 compute-0 ceph-mon[74426]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/1266408288' entity='mgr.compute-0.wwpcae' cmd=[{"prefix": "config rm", "who": "osd.0", "name": "osd_memory_target"}]: dispatch
Dec 11 09:14:22 compute-0 ceph-mgr[74715]: [cephadm INFO root] Adjusting osd_memory_target on compute-1 to  5247M
Dec 11 09:14:22 compute-0 ceph-mgr[74715]: log_channel(cephadm) log [INF] : Adjusting osd_memory_target on compute-1 to  5247M
Dec 11 09:14:22 compute-0 ceph-mon[74426]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=osd_memory_target}] v 0)
Dec 11 09:14:22 compute-0 ceph-mon[74426]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/1266408288' entity='mgr.compute-0.wwpcae' 
Dec 11 09:14:22 compute-0 ceph-mgr[74715]: log_channel(cluster) log [DBG] : pgmap v45: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec 11 09:14:22 compute-0 systemd[1]: var-lib-containers-storage-overlay-8dd7a95d5c3d6436163f81ab84b013097f5467573cc6d7ad1c050dc55ab7c032-merged.mount: Deactivated successfully.
Dec 11 09:14:22 compute-0 podman[83989]: 2025-12-11 09:14:22.777608141 +0000 UTC m=+1.379719010 container remove a633317d8d1b96348c11107e30f14dda4045ad1d64f87110445369db872f9b25 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=lucid_booth, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250325)
Dec 11 09:14:22 compute-0 systemd[1]: libpod-conmon-a633317d8d1b96348c11107e30f14dda4045ad1d64f87110445369db872f9b25.scope: Deactivated successfully.
Dec 11 09:14:22 compute-0 sudo[83886]: pam_unix(sudo:session): session closed for user root
Dec 11 09:14:22 compute-0 ceph-mon[74426]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec 11 09:14:22 compute-0 ceph-mon[74426]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/1266408288' entity='mgr.compute-0.wwpcae' 
Dec 11 09:14:22 compute-0 ceph-mon[74426]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec 11 09:14:22 compute-0 ceph-mon[74426]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/1266408288' entity='mgr.compute-0.wwpcae' 
Dec 11 09:14:22 compute-0 ceph-mon[74426]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec 11 09:14:22 compute-0 ceph-mon[74426]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/1266408288' entity='mgr.compute-0.wwpcae' 
Dec 11 09:14:22 compute-0 ceph-mon[74426]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec 11 09:14:22 compute-0 ceph-mon[74426]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/1266408288' entity='mgr.compute-0.wwpcae' 
Dec 11 09:14:22 compute-0 ceph-mon[74426]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config rm", "who": "osd.1", "name": "osd_memory_target"} v 0)
Dec 11 09:14:22 compute-0 ceph-mon[74426]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/1266408288' entity='mgr.compute-0.wwpcae' cmd=[{"prefix": "config rm", "who": "osd.1", "name": "osd_memory_target"}]: dispatch
Dec 11 09:14:22 compute-0 ceph-mgr[74715]: [cephadm INFO root] Adjusting osd_memory_target on compute-0 to 127.9M
Dec 11 09:14:22 compute-0 ceph-mgr[74715]: log_channel(cephadm) log [INF] : Adjusting osd_memory_target on compute-0 to 127.9M
Dec 11 09:14:22 compute-0 ceph-mon[74426]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=osd_memory_target}] v 0)
Dec 11 09:14:22 compute-0 ceph-mgr[74715]: [cephadm WARNING cephadm.serve] Unable to set osd_memory_target on compute-0 to 134203392: error parsing value: Value '134203392' is below minimum 939524096
Dec 11 09:14:22 compute-0 ceph-mgr[74715]: log_channel(cephadm) log [WRN] : Unable to set osd_memory_target on compute-0 to 134203392: error parsing value: Value '134203392' is below minimum 939524096
Dec 11 09:14:23 compute-0 ceph-mgr[74715]: mgr.server handle_open ignoring open from osd.1 v2:192.168.122.100:6802/2940598697; not ready for session (expect reconnect)
Dec 11 09:14:23 compute-0 ceph-mon[74426]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0)
Dec 11 09:14:23 compute-0 ceph-mon[74426]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/1266408288' entity='mgr.compute-0.wwpcae' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Dec 11 09:14:23 compute-0 ceph-mgr[74715]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Dec 11 09:14:23 compute-0 ceph-mgr[74715]: mgr.server handle_open ignoring open from osd.0 v2:192.168.122.101:6800/676261534; not ready for session (expect reconnect)
Dec 11 09:14:23 compute-0 ceph-mon[74426]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0)
Dec 11 09:14:23 compute-0 ceph-mon[74426]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/1266408288' entity='mgr.compute-0.wwpcae' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Dec 11 09:14:23 compute-0 ceph-mgr[74715]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Dec 11 09:14:23 compute-0 ceph-mon[74426]: from='mgr.14124 192.168.122.100:0/1266408288' entity='mgr.compute-0.wwpcae' 
Dec 11 09:14:23 compute-0 ceph-mon[74426]: from='mgr.14124 192.168.122.100:0/1266408288' entity='mgr.compute-0.wwpcae' 
Dec 11 09:14:23 compute-0 ceph-mon[74426]: from='mgr.14124 192.168.122.100:0/1266408288' entity='mgr.compute-0.wwpcae' 
Dec 11 09:14:23 compute-0 ceph-mon[74426]: from='mgr.14124 192.168.122.100:0/1266408288' entity='mgr.compute-0.wwpcae' 
Dec 11 09:14:23 compute-0 ceph-mon[74426]: from='mgr.14124 192.168.122.100:0/1266408288' entity='mgr.compute-0.wwpcae' cmd=[{"prefix": "config rm", "who": "osd.0", "name": "osd_memory_target"}]: dispatch
Dec 11 09:14:23 compute-0 ceph-mon[74426]: Adjusting osd_memory_target on compute-1 to  5247M
Dec 11 09:14:23 compute-0 ceph-mon[74426]: from='mgr.14124 192.168.122.100:0/1266408288' entity='mgr.compute-0.wwpcae' 
Dec 11 09:14:23 compute-0 ceph-mon[74426]: pgmap v45: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec 11 09:14:23 compute-0 ceph-mon[74426]: from='mgr.14124 192.168.122.100:0/1266408288' entity='mgr.compute-0.wwpcae' 
Dec 11 09:14:23 compute-0 ceph-mon[74426]: from='mgr.14124 192.168.122.100:0/1266408288' entity='mgr.compute-0.wwpcae' 
Dec 11 09:14:23 compute-0 ceph-mon[74426]: from='mgr.14124 192.168.122.100:0/1266408288' entity='mgr.compute-0.wwpcae' 
Dec 11 09:14:23 compute-0 ceph-mon[74426]: from='mgr.14124 192.168.122.100:0/1266408288' entity='mgr.compute-0.wwpcae' 
Dec 11 09:14:23 compute-0 ceph-mon[74426]: from='mgr.14124 192.168.122.100:0/1266408288' entity='mgr.compute-0.wwpcae' cmd=[{"prefix": "config rm", "who": "osd.1", "name": "osd_memory_target"}]: dispatch
Dec 11 09:14:23 compute-0 ceph-mon[74426]: Adjusting osd_memory_target on compute-0 to 127.9M
Dec 11 09:14:23 compute-0 ceph-mon[74426]: Unable to set osd_memory_target on compute-0 to 134203392: error parsing value: Value '134203392' is below minimum 939524096
Dec 11 09:14:23 compute-0 ceph-mon[74426]: from='mgr.14124 192.168.122.100:0/1266408288' entity='mgr.compute-0.wwpcae' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Dec 11 09:14:23 compute-0 ceph-mon[74426]: from='mgr.14124 192.168.122.100:0/1266408288' entity='mgr.compute-0.wwpcae' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Dec 11 09:14:23 compute-0 ceph-mon[74426]: mon.compute-0@0(leader).osd e7 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 11 09:14:24 compute-0 ceph-mgr[74715]: mgr.server handle_open ignoring open from osd.1 v2:192.168.122.100:6802/2940598697; not ready for session (expect reconnect)
Dec 11 09:14:24 compute-0 ceph-mgr[74715]: mgr.server handle_open ignoring open from osd.0 v2:192.168.122.101:6800/676261534; not ready for session (expect reconnect)
Dec 11 09:14:24 compute-0 ceph-mon[74426]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0)
Dec 11 09:14:24 compute-0 ceph-mon[74426]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/1266408288' entity='mgr.compute-0.wwpcae' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Dec 11 09:14:24 compute-0 ceph-mon[74426]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0)
Dec 11 09:14:24 compute-0 ceph-mon[74426]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/1266408288' entity='mgr.compute-0.wwpcae' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Dec 11 09:14:24 compute-0 ceph-mgr[74715]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Dec 11 09:14:24 compute-0 ceph-mgr[74715]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Dec 11 09:14:24 compute-0 ceph-mon[74426]: from='mgr.14124 192.168.122.100:0/1266408288' entity='mgr.compute-0.wwpcae' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Dec 11 09:14:24 compute-0 ceph-mon[74426]: from='mgr.14124 192.168.122.100:0/1266408288' entity='mgr.compute-0.wwpcae' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Dec 11 09:14:24 compute-0 ceph-mgr[74715]: log_channel(cluster) log [DBG] : pgmap v46: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec 11 09:14:25 compute-0 ceph-mgr[74715]: mgr.server handle_open ignoring open from osd.1 v2:192.168.122.100:6802/2940598697; not ready for session (expect reconnect)
Dec 11 09:14:25 compute-0 ceph-mon[74426]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0)
Dec 11 09:14:25 compute-0 ceph-mon[74426]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/1266408288' entity='mgr.compute-0.wwpcae' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Dec 11 09:14:25 compute-0 ceph-mgr[74715]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Dec 11 09:14:25 compute-0 ceph-mgr[74715]: mgr.server handle_open ignoring open from osd.0 v2:192.168.122.101:6800/676261534; not ready for session (expect reconnect)
Dec 11 09:14:25 compute-0 ceph-mon[74426]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0)
Dec 11 09:14:25 compute-0 ceph-mon[74426]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/1266408288' entity='mgr.compute-0.wwpcae' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Dec 11 09:14:25 compute-0 ceph-mgr[74715]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Dec 11 09:14:26 compute-0 ceph-mgr[74715]: mgr.server handle_open ignoring open from osd.1 v2:192.168.122.100:6802/2940598697; not ready for session (expect reconnect)
Dec 11 09:14:26 compute-0 ceph-mon[74426]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0)
Dec 11 09:14:26 compute-0 ceph-mon[74426]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/1266408288' entity='mgr.compute-0.wwpcae' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Dec 11 09:14:26 compute-0 ceph-mgr[74715]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Dec 11 09:14:26 compute-0 ceph-mgr[74715]: mgr.server handle_open ignoring open from osd.0 v2:192.168.122.101:6800/676261534; not ready for session (expect reconnect)
Dec 11 09:14:26 compute-0 ceph-mon[74426]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0)
Dec 11 09:14:26 compute-0 ceph-mgr[74715]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Dec 11 09:14:26 compute-0 ceph-mon[74426]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/1266408288' entity='mgr.compute-0.wwpcae' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Dec 11 09:14:26 compute-0 ceph-mon[74426]: pgmap v46: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec 11 09:14:26 compute-0 ceph-mon[74426]: from='mgr.14124 192.168.122.100:0/1266408288' entity='mgr.compute-0.wwpcae' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Dec 11 09:14:26 compute-0 ceph-mon[74426]: from='mgr.14124 192.168.122.100:0/1266408288' entity='mgr.compute-0.wwpcae' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Dec 11 09:14:26 compute-0 ceph-mgr[74715]: log_channel(cluster) log [DBG] : pgmap v47: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec 11 09:14:27 compute-0 ceph-mon[74426]: mon.compute-0@0(leader).osd e7 do_prune osdmap full prune enabled
Dec 11 09:14:27 compute-0 ceph-mon[74426]: mon.compute-0@0(leader).osd e7 encode_pending skipping prime_pg_temp; mapping job did not start
Dec 11 09:14:27 compute-0 ceph-mon[74426]: mon.compute-0@0(leader).osd e8 e8: 2 total, 1 up, 2 in
Dec 11 09:14:27 compute-0 ceph-mon[74426]: log_channel(cluster) log [INF] : osd.0 [v2:192.168.122.101:6800/676261534,v1:192.168.122.101:6801/676261534] boot
Dec 11 09:14:27 compute-0 ceph-mon[74426]: log_channel(cluster) log [DBG] : osdmap e8: 2 total, 1 up, 2 in
Dec 11 09:14:27 compute-0 ceph-mon[74426]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0)
Dec 11 09:14:27 compute-0 ceph-mon[74426]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/1266408288' entity='mgr.compute-0.wwpcae' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Dec 11 09:14:27 compute-0 ceph-mon[74426]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0)
Dec 11 09:14:27 compute-0 ceph-mon[74426]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/1266408288' entity='mgr.compute-0.wwpcae' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Dec 11 09:14:27 compute-0 ceph-mgr[74715]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Dec 11 09:14:27 compute-0 ceph-mgr[74715]: mgr.server handle_open ignoring open from osd.1 v2:192.168.122.100:6802/2940598697; not ready for session (expect reconnect)
Dec 11 09:14:27 compute-0 ceph-mon[74426]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0)
Dec 11 09:14:27 compute-0 ceph-mon[74426]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/1266408288' entity='mgr.compute-0.wwpcae' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Dec 11 09:14:27 compute-0 ceph-mgr[74715]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Dec 11 09:14:27 compute-0 ceph-mon[74426]: OSD bench result of 6842.182499 IOPS is not within the threshold limit range of 50.000000 IOPS and 500.000000 IOPS for osd.0. IOPS capacity is unchanged at 315.000000 IOPS. The recommendation is to establish the osd's IOPS capacity using other benchmark tools (e.g. Fio) and then override osd_mclock_max_capacity_iops_[hdd|ssd].
Dec 11 09:14:27 compute-0 ceph-mon[74426]: from='mgr.14124 192.168.122.100:0/1266408288' entity='mgr.compute-0.wwpcae' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Dec 11 09:14:27 compute-0 ceph-mon[74426]: from='mgr.14124 192.168.122.100:0/1266408288' entity='mgr.compute-0.wwpcae' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Dec 11 09:14:27 compute-0 ceph-mon[74426]: pgmap v47: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec 11 09:14:27 compute-0 ceph-mon[74426]: osd.0 [v2:192.168.122.101:6800/676261534,v1:192.168.122.101:6801/676261534] boot
Dec 11 09:14:27 compute-0 ceph-mon[74426]: osdmap e8: 2 total, 1 up, 2 in
Dec 11 09:14:27 compute-0 ceph-mon[74426]: from='mgr.14124 192.168.122.100:0/1266408288' entity='mgr.compute-0.wwpcae' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Dec 11 09:14:27 compute-0 ceph-mon[74426]: from='mgr.14124 192.168.122.100:0/1266408288' entity='mgr.compute-0.wwpcae' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Dec 11 09:14:27 compute-0 ceph-mon[74426]: from='mgr.14124 192.168.122.100:0/1266408288' entity='mgr.compute-0.wwpcae' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Dec 11 09:14:27 compute-0 ceph-osd[82859]: osd.1 0 maybe_override_max_osd_capacity_for_qos osd bench result - bandwidth (MiB/sec): 22.616 iops: 5789.791 elapsed_sec: 0.518
Dec 11 09:14:27 compute-0 ceph-osd[82859]: log_channel(cluster) log [WRN] : OSD bench result of 5789.791041 IOPS is not within the threshold limit range of 50.000000 IOPS and 500.000000 IOPS for osd.1. IOPS capacity is unchanged at 315.000000 IOPS. The recommendation is to establish the osd's IOPS capacity using other benchmark tools (e.g. Fio) and then override osd_mclock_max_capacity_iops_[hdd|ssd].
Dec 11 09:14:27 compute-0 ceph-osd[82859]: osd.1 0 waiting for initial osdmap
Dec 11 09:14:27 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-osd-1[82855]: 2025-12-11T09:14:27.752+0000 7f5b3310a640 -1 osd.1 0 waiting for initial osdmap
Dec 11 09:14:27 compute-0 ceph-osd[82859]: osd.1 8 crush map has features 288514050185494528, adjusting msgr requires for clients
Dec 11 09:14:27 compute-0 ceph-osd[82859]: osd.1 8 crush map has features 288514050185494528 was 288232575208792577, adjusting msgr requires for mons
Dec 11 09:14:27 compute-0 ceph-osd[82859]: osd.1 8 crush map has features 3314932999778484224, adjusting msgr requires for osds
Dec 11 09:14:27 compute-0 ceph-osd[82859]: osd.1 8 check_osdmap_features require_osd_release unknown -> squid
Dec 11 09:14:27 compute-0 ceph-osd[82859]: osd.1 8 set_numa_affinity unable to identify public interface '' numa node: (2) No such file or directory
Dec 11 09:14:27 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-osd-1[82855]: 2025-12-11T09:14:27.773+0000 7f5b2e732640 -1 osd.1 8 set_numa_affinity unable to identify public interface '' numa node: (2) No such file or directory
Dec 11 09:14:27 compute-0 ceph-osd[82859]: osd.1 8 set_numa_affinity not setting numa affinity
Dec 11 09:14:27 compute-0 ceph-osd[82859]: osd.1 8 _collect_metadata loop3:  no unique device id for loop3: fallback method has no model nor serial no unique device path for loop3: no symlink to loop3 in /dev/disk/by-path
Dec 11 09:14:28 compute-0 ceph-mgr[74715]: mgr.server handle_open ignoring open from osd.1 v2:192.168.122.100:6802/2940598697; not ready for session (expect reconnect)
Dec 11 09:14:28 compute-0 ceph-mon[74426]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0)
Dec 11 09:14:28 compute-0 ceph-mon[74426]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/1266408288' entity='mgr.compute-0.wwpcae' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Dec 11 09:14:28 compute-0 ceph-mgr[74715]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Dec 11 09:14:28 compute-0 ceph-mon[74426]: mon.compute-0@0(leader).osd e8 do_prune osdmap full prune enabled
Dec 11 09:14:28 compute-0 ceph-mon[74426]: mon.compute-0@0(leader).osd e8 encode_pending skipping prime_pg_temp; mapping job did not start
Dec 11 09:14:28 compute-0 ceph-mon[74426]: mon.compute-0@0(leader).osd e9 e9: 2 total, 2 up, 2 in
Dec 11 09:14:28 compute-0 ceph-mon[74426]: log_channel(cluster) log [INF] : osd.1 [v2:192.168.122.100:6802/2940598697,v1:192.168.122.100:6803/2940598697] boot
Dec 11 09:14:28 compute-0 ceph-mon[74426]: log_channel(cluster) log [DBG] : osdmap e9: 2 total, 2 up, 2 in
Dec 11 09:14:28 compute-0 ceph-mon[74426]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0)
Dec 11 09:14:28 compute-0 ceph-mon[74426]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/1266408288' entity='mgr.compute-0.wwpcae' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Dec 11 09:14:28 compute-0 ceph-mon[74426]: OSD bench result of 5789.791041 IOPS is not within the threshold limit range of 50.000000 IOPS and 500.000000 IOPS for osd.1. IOPS capacity is unchanged at 315.000000 IOPS. The recommendation is to establish the osd's IOPS capacity using other benchmark tools (e.g. Fio) and then override osd_mclock_max_capacity_iops_[hdd|ssd].
Dec 11 09:14:28 compute-0 ceph-mon[74426]: from='mgr.14124 192.168.122.100:0/1266408288' entity='mgr.compute-0.wwpcae' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Dec 11 09:14:28 compute-0 ceph-osd[82859]: osd.1 9 state: booting -> active
Dec 11 09:14:28 compute-0 ceph-mgr[74715]: log_channel(cluster) log [DBG] : pgmap v50: 0 pgs: ; 0 B data, 426 MiB used, 20 GiB / 20 GiB avail
Dec 11 09:14:28 compute-0 ceph-mon[74426]: mon.compute-0@0(leader).osd e9 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 11 09:14:29 compute-0 ceph-mgr[74715]: [devicehealth INFO root] creating mgr pool
Dec 11 09:14:29 compute-0 ceph-mon[74426]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool create", "format": "json", "pool": ".mgr", "pg_num": 1, "pg_num_min": 1, "pg_num_max": 32, "yes_i_really_mean_it": true} v 0)
Dec 11 09:14:29 compute-0 ceph-mon[74426]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/1266408288' entity='mgr.compute-0.wwpcae' cmd=[{"prefix": "osd pool create", "format": "json", "pool": ".mgr", "pg_num": 1, "pg_num_min": 1, "pg_num_max": 32, "yes_i_really_mean_it": true}]: dispatch
Dec 11 09:14:29 compute-0 ceph-mon[74426]: mon.compute-0@0(leader).osd e9 do_prune osdmap full prune enabled
Dec 11 09:14:29 compute-0 ceph-mon[74426]: mon.compute-0@0(leader).osd e9 encode_pending skipping prime_pg_temp; mapping job did not start
Dec 11 09:14:29 compute-0 ceph-mon[74426]: osd.1 [v2:192.168.122.100:6802/2940598697,v1:192.168.122.100:6803/2940598697] boot
Dec 11 09:14:29 compute-0 ceph-mon[74426]: osdmap e9: 2 total, 2 up, 2 in
Dec 11 09:14:29 compute-0 ceph-mon[74426]: from='mgr.14124 192.168.122.100:0/1266408288' entity='mgr.compute-0.wwpcae' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Dec 11 09:14:29 compute-0 ceph-mon[74426]: pgmap v50: 0 pgs: ; 0 B data, 426 MiB used, 20 GiB / 20 GiB avail
Dec 11 09:14:29 compute-0 ceph-mon[74426]: from='mgr.14124 192.168.122.100:0/1266408288' entity='mgr.compute-0.wwpcae' cmd=[{"prefix": "osd pool create", "format": "json", "pool": ".mgr", "pg_num": 1, "pg_num_min": 1, "pg_num_max": 32, "yes_i_really_mean_it": true}]: dispatch
Dec 11 09:14:29 compute-0 ceph-mon[74426]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/1266408288' entity='mgr.compute-0.wwpcae' cmd='[{"prefix": "osd pool create", "format": "json", "pool": ".mgr", "pg_num": 1, "pg_num_min": 1, "pg_num_max": 32, "yes_i_really_mean_it": true}]': finished
Dec 11 09:14:29 compute-0 ceph-mon[74426]: mon.compute-0@0(leader).osd e10 e10: 2 total, 2 up, 2 in
Dec 11 09:14:29 compute-0 ceph-mon[74426]: mon.compute-0@0(leader).osd e10 crush map has features 3314933000852226048, adjusting msgr requires
Dec 11 09:14:29 compute-0 ceph-mon[74426]: mon.compute-0@0(leader).osd e10 crush map has features 288514051259236352, adjusting msgr requires
Dec 11 09:14:29 compute-0 ceph-mon[74426]: mon.compute-0@0(leader).osd e10 crush map has features 288514051259236352, adjusting msgr requires
Dec 11 09:14:29 compute-0 ceph-mon[74426]: mon.compute-0@0(leader).osd e10 crush map has features 288514051259236352, adjusting msgr requires
Dec 11 09:14:29 compute-0 ceph-mon[74426]: log_channel(cluster) log [DBG] : osdmap e10: 2 total, 2 up, 2 in
Dec 11 09:14:29 compute-0 ceph-mon[74426]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool application enable", "format": "json", "pool": ".mgr", "app": "mgr", "yes_i_really_mean_it": true} v 0)
Dec 11 09:14:29 compute-0 ceph-mon[74426]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/1266408288' entity='mgr.compute-0.wwpcae' cmd=[{"prefix": "osd pool application enable", "format": "json", "pool": ".mgr", "app": "mgr", "yes_i_really_mean_it": true}]: dispatch
Dec 11 09:14:29 compute-0 ceph-osd[82859]: osd.1 10 crush map has features 288514051259236352, adjusting msgr requires for clients
Dec 11 09:14:29 compute-0 ceph-osd[82859]: osd.1 10 crush map has features 288514051259236352 was 288514050185503233, adjusting msgr requires for mons
Dec 11 09:14:29 compute-0 ceph-osd[82859]: osd.1 10 crush map has features 3314933000852226048, adjusting msgr requires for osds
Dec 11 09:14:30 compute-0 ceph-mgr[74715]: log_channel(cluster) log [DBG] : pgmap v52: 1 pgs: 1 unknown; 0 B data, 853 MiB used, 39 GiB / 40 GiB avail
Dec 11 09:14:30 compute-0 ceph-mon[74426]: mon.compute-0@0(leader).osd e10 do_prune osdmap full prune enabled
Dec 11 09:14:30 compute-0 ceph-mon[74426]: log_channel(cluster) log [WRN] : Health check failed: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Dec 11 09:14:30 compute-0 ceph-mon[74426]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/1266408288' entity='mgr.compute-0.wwpcae' cmd='[{"prefix": "osd pool application enable", "format": "json", "pool": ".mgr", "app": "mgr", "yes_i_really_mean_it": true}]': finished
Dec 11 09:14:30 compute-0 ceph-mon[74426]: mon.compute-0@0(leader).osd e11 e11: 2 total, 2 up, 2 in
Dec 11 09:14:30 compute-0 ceph-mon[74426]: log_channel(cluster) log [DBG] : osdmap e11: 2 total, 2 up, 2 in
Dec 11 09:14:30 compute-0 ceph-mon[74426]: from='mgr.14124 192.168.122.100:0/1266408288' entity='mgr.compute-0.wwpcae' cmd='[{"prefix": "osd pool create", "format": "json", "pool": ".mgr", "pg_num": 1, "pg_num_min": 1, "pg_num_max": 32, "yes_i_really_mean_it": true}]': finished
Dec 11 09:14:30 compute-0 ceph-mon[74426]: osdmap e10: 2 total, 2 up, 2 in
Dec 11 09:14:30 compute-0 ceph-mon[74426]: from='mgr.14124 192.168.122.100:0/1266408288' entity='mgr.compute-0.wwpcae' cmd=[{"prefix": "osd pool application enable", "format": "json", "pool": ".mgr", "app": "mgr", "yes_i_really_mean_it": true}]: dispatch
Dec 11 09:14:31 compute-0 ceph-mgr[74715]: [devicehealth INFO root] creating main.db for devicehealth
Dec 11 09:14:31 compute-0 ceph-mon[74426]: mon.compute-0@0(leader).osd e11 do_prune osdmap full prune enabled
Dec 11 09:14:31 compute-0 ceph-mon[74426]: pgmap v52: 1 pgs: 1 unknown; 0 B data, 853 MiB used, 39 GiB / 40 GiB avail
Dec 11 09:14:31 compute-0 ceph-mon[74426]: Health check failed: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Dec 11 09:14:31 compute-0 ceph-mon[74426]: from='mgr.14124 192.168.122.100:0/1266408288' entity='mgr.compute-0.wwpcae' cmd='[{"prefix": "osd pool application enable", "format": "json", "pool": ".mgr", "app": "mgr", "yes_i_really_mean_it": true}]': finished
Dec 11 09:14:31 compute-0 ceph-mon[74426]: osdmap e11: 2 total, 2 up, 2 in
Dec 11 09:14:31 compute-0 ceph-mon[74426]: mon.compute-0@0(leader).osd e12 e12: 2 total, 2 up, 2 in
Dec 11 09:14:31 compute-0 ceph-mon[74426]: log_channel(cluster) log [DBG] : osdmap e12: 2 total, 2 up, 2 in
Dec 11 09:14:32 compute-0 ceph-mgr[74715]: [devicehealth INFO root] Check health
Dec 11 09:14:32 compute-0 ceph-mon[74426]: log_channel(audit) log [INF] : from='admin socket' entity='admin socket' cmd='smart' args=[json]: dispatch
Dec 11 09:14:32 compute-0 sudo[84990]:     ceph : PWD=/ ; USER=root ; COMMAND=/usr/sbin/smartctl -x --json=o /dev/vda
Dec 11 09:14:32 compute-0 sudo[84990]: pam_systemd(sudo:session): Failed to connect to system bus: No such file or directory
Dec 11 09:14:32 compute-0 sudo[84990]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=167)
Dec 11 09:14:32 compute-0 sudo[84990]: pam_unix(sudo:session): session closed for user root
Dec 11 09:14:32 compute-0 ceph-mon[74426]: log_channel(audit) log [INF] : from='admin socket' entity='admin socket' cmd=smart args=[json]: finished
Dec 11 09:14:32 compute-0 ceph-mon[74426]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon metadata", "id": "compute-0"} v 0)
Dec 11 09:14:32 compute-0 ceph-mon[74426]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/1266408288' entity='mgr.compute-0.wwpcae' cmd=[{"prefix": "mon metadata", "id": "compute-0"}]: dispatch
Dec 11 09:14:32 compute-0 ceph-mgr[74715]: log_channel(cluster) log [DBG] : pgmap v55: 1 pgs: 1 active+clean; 449 KiB data, 453 MiB used, 40 GiB / 40 GiB avail
Dec 11 09:14:32 compute-0 ceph-mon[74426]: log_channel(cluster) log [INF] : Health check cleared: POOL_APP_NOT_ENABLED (was: 1 pool(s) do not have an application enabled)
Dec 11 09:14:32 compute-0 ceph-mon[74426]: osdmap e12: 2 total, 2 up, 2 in
Dec 11 09:14:32 compute-0 ceph-mon[74426]: from='admin socket' entity='admin socket' cmd='smart' args=[json]: dispatch
Dec 11 09:14:32 compute-0 ceph-mon[74426]: from='admin socket' entity='admin socket' cmd=smart args=[json]: finished
Dec 11 09:14:32 compute-0 ceph-mon[74426]: from='mgr.14124 192.168.122.100:0/1266408288' entity='mgr.compute-0.wwpcae' cmd=[{"prefix": "mon metadata", "id": "compute-0"}]: dispatch
Dec 11 09:14:33 compute-0 ceph-mon[74426]: pgmap v55: 1 pgs: 1 active+clean; 449 KiB data, 453 MiB used, 40 GiB / 40 GiB avail
Dec 11 09:14:33 compute-0 ceph-mon[74426]: Health check cleared: POOL_APP_NOT_ENABLED (was: 1 pool(s) do not have an application enabled)
Dec 11 09:14:33 compute-0 ceph-mon[74426]: mon.compute-0@0(leader).osd e12 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 11 09:14:34 compute-0 ceph-mon[74426]: log_channel(cluster) log [DBG] : mgrmap e9: compute-0.wwpcae(active, since 103s)
Dec 11 09:14:34 compute-0 sudo[85016]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ckhqulhlvhobdqjbzlteejtaubmirycs ; /usr/bin/python3'
Dec 11 09:14:34 compute-0 sudo[85016]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 11 09:14:34 compute-0 python3[85018]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   status --format json | jq .osdmap.num_up_osds _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 11 09:14:34 compute-0 podman[85020]: 2025-12-11 09:14:34.45290354 +0000 UTC m=+0.031691208 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Dec 11 09:14:34 compute-0 ceph-mgr[74715]: log_channel(cluster) log [DBG] : pgmap v56: 1 pgs: 1 active+clean; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Dec 11 09:14:34 compute-0 podman[85020]: 2025-12-11 09:14:34.876746018 +0000 UTC m=+0.455533666 container create 4417f9864b753745a731ddeccec2d73c17db0fdfc07dcd32bcf30c9729f9ac46 (image=quay.io/ceph/ceph:v19, name=hardcore_knuth, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid)
Dec 11 09:14:35 compute-0 ceph-mon[74426]: mgrmap e9: compute-0.wwpcae(active, since 103s)
Dec 11 09:14:35 compute-0 systemd[1]: Started libpod-conmon-4417f9864b753745a731ddeccec2d73c17db0fdfc07dcd32bcf30c9729f9ac46.scope.
Dec 11 09:14:35 compute-0 systemd[1]: Started libcrun container.
Dec 11 09:14:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2d3d599fa8830dd162790afd586d59857d2cfac2cc8e470dc1fa3f70c78d4d18/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec 11 09:14:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2d3d599fa8830dd162790afd586d59857d2cfac2cc8e470dc1fa3f70c78d4d18/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 11 09:14:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2d3d599fa8830dd162790afd586d59857d2cfac2cc8e470dc1fa3f70c78d4d18/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Dec 11 09:14:35 compute-0 podman[85020]: 2025-12-11 09:14:35.259966631 +0000 UTC m=+0.838754289 container init 4417f9864b753745a731ddeccec2d73c17db0fdfc07dcd32bcf30c9729f9ac46 (image=quay.io/ceph/ceph:v19, name=hardcore_knuth, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=squid)
Dec 11 09:14:35 compute-0 podman[85020]: 2025-12-11 09:14:35.270839163 +0000 UTC m=+0.849626811 container start 4417f9864b753745a731ddeccec2d73c17db0fdfc07dcd32bcf30c9729f9ac46 (image=quay.io/ceph/ceph:v19, name=hardcore_knuth, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=squid)
Dec 11 09:14:35 compute-0 podman[85020]: 2025-12-11 09:14:35.27426557 +0000 UTC m=+0.853053228 container attach 4417f9864b753745a731ddeccec2d73c17db0fdfc07dcd32bcf30c9729f9ac46 (image=quay.io/ceph/ceph:v19, name=hardcore_knuth, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 11 09:14:35 compute-0 ceph-mon[74426]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json"} v 0)
Dec 11 09:14:35 compute-0 ceph-mon[74426]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2752830764' entity='client.admin' cmd=[{"prefix": "status", "format": "json"}]: dispatch
Dec 11 09:14:35 compute-0 hardcore_knuth[85036]: 
Dec 11 09:14:35 compute-0 hardcore_knuth[85036]: {"fsid":"31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060","health":{"status":"HEALTH_WARN","checks":{"CEPHADM_APPLY_SPEC_FAIL":{"severity":"HEALTH_WARN","summary":{"message":"Failed to apply 2 service(s): mon,mgr","count":2},"muted":false}},"mutes":[]},"election_epoch":5,"quorum":[0],"quorum_names":["compute-0"],"quorum_age":127,"monmap":{"epoch":1,"min_mon_release_name":"squid","num_mons":1},"osdmap":{"epoch":12,"num_osds":2,"num_up_osds":2,"osd_up_since":1765444468,"num_in_osds":2,"osd_in_since":1765444445,"num_remapped_pgs":0},"pgmap":{"pgs_by_state":[{"state_name":"active+clean","count":1}],"num_pgs":1,"num_pools":1,"num_objects":2,"data_bytes":459280,"bytes_used":55508992,"bytes_avail":42885775360,"bytes_total":42941284352},"fsmap":{"epoch":1,"btime":"2025-12-11T09:12:26:191373+0000","by_rank":[],"up:standby":0},"mgrmap":{"available":true,"num_standbys":0,"modules":["cephadm","iostat","nfs","restful"],"services":{}},"servicemap":{"epoch":2,"modified":"2025-12-11T09:13:53.054285+0000","services":{}},"progress_events":{}}
Dec 11 09:14:35 compute-0 systemd[1]: libpod-4417f9864b753745a731ddeccec2d73c17db0fdfc07dcd32bcf30c9729f9ac46.scope: Deactivated successfully.
Dec 11 09:14:35 compute-0 podman[85020]: 2025-12-11 09:14:35.959983395 +0000 UTC m=+1.538771053 container died 4417f9864b753745a731ddeccec2d73c17db0fdfc07dcd32bcf30c9729f9ac46 (image=quay.io/ceph/ceph:v19, name=hardcore_knuth, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 11 09:14:36 compute-0 systemd[1]: var-lib-containers-storage-overlay-2d3d599fa8830dd162790afd586d59857d2cfac2cc8e470dc1fa3f70c78d4d18-merged.mount: Deactivated successfully.
Dec 11 09:14:36 compute-0 ceph-mon[74426]: pgmap v56: 1 pgs: 1 active+clean; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Dec 11 09:14:36 compute-0 ceph-mon[74426]: from='client.? 192.168.122.100:0/2752830764' entity='client.admin' cmd=[{"prefix": "status", "format": "json"}]: dispatch
Dec 11 09:14:36 compute-0 podman[85020]: 2025-12-11 09:14:36.273104562 +0000 UTC m=+1.851892210 container remove 4417f9864b753745a731ddeccec2d73c17db0fdfc07dcd32bcf30c9729f9ac46 (image=quay.io/ceph/ceph:v19, name=hardcore_knuth, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.build-date=20250325)
Dec 11 09:14:36 compute-0 systemd[1]: libpod-conmon-4417f9864b753745a731ddeccec2d73c17db0fdfc07dcd32bcf30c9729f9ac46.scope: Deactivated successfully.
Dec 11 09:14:36 compute-0 sudo[85016]: pam_unix(sudo:session): session closed for user root
Dec 11 09:14:36 compute-0 sudo[85098]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pxzlcubjwnjlbizgcipojkbgmkyopfoi ; /usr/bin/python3'
Dec 11 09:14:36 compute-0 sudo[85098]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 11 09:14:36 compute-0 ceph-mgr[74715]: log_channel(cluster) log [DBG] : pgmap v57: 1 pgs: 1 active+clean; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Dec 11 09:14:36 compute-0 python3[85100]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool create vms  replicated_rule --autoscale-mode on _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 11 09:14:37 compute-0 podman[85101]: 2025-12-11 09:14:37.007043663 +0000 UTC m=+0.124248968 container create 43c001beeea9f183d89477a409141d6291901874f7043c1f032af28de2367a37 (image=quay.io/ceph/ceph:v19, name=epic_ride, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 11 09:14:37 compute-0 podman[85101]: 2025-12-11 09:14:36.9147087 +0000 UTC m=+0.031914015 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Dec 11 09:14:37 compute-0 systemd[1]: Started libpod-conmon-43c001beeea9f183d89477a409141d6291901874f7043c1f032af28de2367a37.scope.
Dec 11 09:14:37 compute-0 systemd[1]: Started libcrun container.
Dec 11 09:14:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bcf1de83a036e9e20f746ec8a3fe85e8fef09f6cf152d23f33088034d8819ac7/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec 11 09:14:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bcf1de83a036e9e20f746ec8a3fe85e8fef09f6cf152d23f33088034d8819ac7/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 11 09:14:37 compute-0 podman[85101]: 2025-12-11 09:14:37.11380966 +0000 UTC m=+0.231014965 container init 43c001beeea9f183d89477a409141d6291901874f7043c1f032af28de2367a37 (image=quay.io/ceph/ceph:v19, name=epic_ride, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1)
Dec 11 09:14:37 compute-0 podman[85101]: 2025-12-11 09:14:37.121294956 +0000 UTC m=+0.238500261 container start 43c001beeea9f183d89477a409141d6291901874f7043c1f032af28de2367a37 (image=quay.io/ceph/ceph:v19, name=epic_ride, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Dec 11 09:14:37 compute-0 podman[85101]: 2025-12-11 09:14:37.125820028 +0000 UTC m=+0.243025353 container attach 43c001beeea9f183d89477a409141d6291901874f7043c1f032af28de2367a37 (image=quay.io/ceph/ceph:v19, name=epic_ride, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Dec 11 09:14:37 compute-0 ceph-mon[74426]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool create", "pool": "vms", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} v 0)
Dec 11 09:14:37 compute-0 ceph-mon[74426]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/161022377' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "vms", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Dec 11 09:14:38 compute-0 ceph-mon[74426]: mon.compute-0@0(leader).osd e12 do_prune osdmap full prune enabled
Dec 11 09:14:38 compute-0 ceph-mon[74426]: pgmap v57: 1 pgs: 1 active+clean; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Dec 11 09:14:38 compute-0 ceph-mon[74426]: from='client.? 192.168.122.100:0/161022377' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "vms", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Dec 11 09:14:38 compute-0 ceph-mon[74426]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/161022377' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "vms", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Dec 11 09:14:38 compute-0 ceph-mon[74426]: mon.compute-0@0(leader).osd e13 e13: 2 total, 2 up, 2 in
Dec 11 09:14:38 compute-0 epic_ride[85116]: pool 'vms' created
Dec 11 09:14:38 compute-0 ceph-mon[74426]: log_channel(cluster) log [DBG] : osdmap e13: 2 total, 2 up, 2 in
Dec 11 09:14:38 compute-0 systemd[1]: libpod-43c001beeea9f183d89477a409141d6291901874f7043c1f032af28de2367a37.scope: Deactivated successfully.
Dec 11 09:14:38 compute-0 podman[85101]: 2025-12-11 09:14:38.298256699 +0000 UTC m=+1.415462004 container died 43c001beeea9f183d89477a409141d6291901874f7043c1f032af28de2367a37 (image=quay.io/ceph/ceph:v19, name=epic_ride, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec 11 09:14:38 compute-0 systemd[1]: var-lib-containers-storage-overlay-bcf1de83a036e9e20f746ec8a3fe85e8fef09f6cf152d23f33088034d8819ac7-merged.mount: Deactivated successfully.
Dec 11 09:14:38 compute-0 podman[85101]: 2025-12-11 09:14:38.452069427 +0000 UTC m=+1.569274732 container remove 43c001beeea9f183d89477a409141d6291901874f7043c1f032af28de2367a37 (image=quay.io/ceph/ceph:v19, name=epic_ride, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1)
Dec 11 09:14:38 compute-0 systemd[1]: libpod-conmon-43c001beeea9f183d89477a409141d6291901874f7043c1f032af28de2367a37.scope: Deactivated successfully.
Dec 11 09:14:38 compute-0 sudo[85098]: pam_unix(sudo:session): session closed for user root
Dec 11 09:14:38 compute-0 sudo[85179]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ggjzddoaumqzkvfzfcvffgbzhprezeew ; /usr/bin/python3'
Dec 11 09:14:38 compute-0 sudo[85179]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 11 09:14:38 compute-0 ceph-mgr[74715]: log_channel(cluster) log [DBG] : pgmap v59: 2 pgs: 1 unknown, 1 active+clean; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Dec 11 09:14:38 compute-0 python3[85181]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool create volumes  replicated_rule --autoscale-mode on _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 11 09:14:38 compute-0 podman[85182]: 2025-12-11 09:14:38.799124801 +0000 UTC m=+0.024269844 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Dec 11 09:14:38 compute-0 podman[85182]: 2025-12-11 09:14:38.953105513 +0000 UTC m=+0.178250546 container create 1521788ddf6bfdc73e3f75949d2fc5baa94c513791186635451577f0ebb6397a (image=quay.io/ceph/ceph:v19, name=vigilant_greider, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 11 09:14:38 compute-0 ceph-mon[74426]: mon.compute-0@0(leader).osd e13 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 11 09:14:38 compute-0 systemd[1]: Started libpod-conmon-1521788ddf6bfdc73e3f75949d2fc5baa94c513791186635451577f0ebb6397a.scope.
Dec 11 09:14:39 compute-0 systemd[1]: Started libcrun container.
Dec 11 09:14:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/11bcf458f55dd83df390e9c8dcf5572563387d0fe2046afd2903d6cca93d646e/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec 11 09:14:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/11bcf458f55dd83df390e9c8dcf5572563387d0fe2046afd2903d6cca93d646e/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 11 09:14:39 compute-0 podman[85182]: 2025-12-11 09:14:39.078993573 +0000 UTC m=+0.304138646 container init 1521788ddf6bfdc73e3f75949d2fc5baa94c513791186635451577f0ebb6397a (image=quay.io/ceph/ceph:v19, name=vigilant_greider, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.license=GPLv2)
Dec 11 09:14:39 compute-0 podman[85182]: 2025-12-11 09:14:39.086038745 +0000 UTC m=+0.311183788 container start 1521788ddf6bfdc73e3f75949d2fc5baa94c513791186635451577f0ebb6397a (image=quay.io/ceph/ceph:v19, name=vigilant_greider, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, OSD_FLAVOR=default, ceph=True, CEPH_REF=squid)
Dec 11 09:14:39 compute-0 podman[85182]: 2025-12-11 09:14:39.110407301 +0000 UTC m=+0.335552374 container attach 1521788ddf6bfdc73e3f75949d2fc5baa94c513791186635451577f0ebb6397a (image=quay.io/ceph/ceph:v19, name=vigilant_greider, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 11 09:14:39 compute-0 ceph-mon[74426]: mon.compute-0@0(leader).osd e13 do_prune osdmap full prune enabled
Dec 11 09:14:39 compute-0 ceph-mon[74426]: log_channel(cluster) log [WRN] : Health check failed: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Dec 11 09:14:39 compute-0 ceph-mon[74426]: from='client.? 192.168.122.100:0/161022377' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "vms", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Dec 11 09:14:39 compute-0 ceph-mon[74426]: osdmap e13: 2 total, 2 up, 2 in
Dec 11 09:14:39 compute-0 ceph-mon[74426]: mon.compute-0@0(leader).osd e14 e14: 2 total, 2 up, 2 in
Dec 11 09:14:39 compute-0 ceph-mon[74426]: log_channel(cluster) log [DBG] : osdmap e14: 2 total, 2 up, 2 in
Dec 11 09:14:39 compute-0 ceph-mon[74426]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool create", "pool": "volumes", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} v 0)
Dec 11 09:14:39 compute-0 ceph-mon[74426]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2379359262' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "volumes", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Dec 11 09:14:40 compute-0 ceph-mon[74426]: mon.compute-0@0(leader).osd e14 do_prune osdmap full prune enabled
Dec 11 09:14:40 compute-0 ceph-mon[74426]: pgmap v59: 2 pgs: 1 unknown, 1 active+clean; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Dec 11 09:14:40 compute-0 ceph-mon[74426]: Health check failed: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Dec 11 09:14:40 compute-0 ceph-mon[74426]: osdmap e14: 2 total, 2 up, 2 in
Dec 11 09:14:40 compute-0 ceph-mon[74426]: from='client.? 192.168.122.100:0/2379359262' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "volumes", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Dec 11 09:14:40 compute-0 ceph-mon[74426]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2379359262' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "volumes", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Dec 11 09:14:40 compute-0 ceph-mon[74426]: mon.compute-0@0(leader).osd e15 e15: 2 total, 2 up, 2 in
Dec 11 09:14:40 compute-0 vigilant_greider[85197]: pool 'volumes' created
Dec 11 09:14:40 compute-0 ceph-mon[74426]: log_channel(cluster) log [DBG] : osdmap e15: 2 total, 2 up, 2 in
Dec 11 09:14:40 compute-0 systemd[1]: libpod-1521788ddf6bfdc73e3f75949d2fc5baa94c513791186635451577f0ebb6397a.scope: Deactivated successfully.
Dec 11 09:14:40 compute-0 podman[85182]: 2025-12-11 09:14:40.546546826 +0000 UTC m=+1.771691869 container died 1521788ddf6bfdc73e3f75949d2fc5baa94c513791186635451577f0ebb6397a (image=quay.io/ceph/ceph:v19, name=vigilant_greider, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Dec 11 09:14:40 compute-0 systemd[1]: var-lib-containers-storage-overlay-11bcf458f55dd83df390e9c8dcf5572563387d0fe2046afd2903d6cca93d646e-merged.mount: Deactivated successfully.
Dec 11 09:14:40 compute-0 ceph-mgr[74715]: log_channel(cluster) log [DBG] : pgmap v62: 3 pgs: 2 unknown, 1 active+clean; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Dec 11 09:14:40 compute-0 podman[85182]: 2025-12-11 09:14:40.783078294 +0000 UTC m=+2.008223337 container remove 1521788ddf6bfdc73e3f75949d2fc5baa94c513791186635451577f0ebb6397a (image=quay.io/ceph/ceph:v19, name=vigilant_greider, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 11 09:14:40 compute-0 sudo[85179]: pam_unix(sudo:session): session closed for user root
Dec 11 09:14:40 compute-0 systemd[1]: libpod-conmon-1521788ddf6bfdc73e3f75949d2fc5baa94c513791186635451577f0ebb6397a.scope: Deactivated successfully.
Dec 11 09:14:40 compute-0 sudo[85260]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pwifsdjrjugsyebvurexpvopvtfgwjkv ; /usr/bin/python3'
Dec 11 09:14:40 compute-0 sudo[85260]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 11 09:14:41 compute-0 python3[85262]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool create backups  replicated_rule --autoscale-mode on _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 11 09:14:41 compute-0 podman[85263]: 2025-12-11 09:14:41.119842924 +0000 UTC m=+0.045956885 container create f869adcbd0b4c506f30937636aa9d56aeb45bcd196eef4e2d98336bc0b1747b7 (image=quay.io/ceph/ceph:v19, name=nifty_hofstadter, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Dec 11 09:14:41 compute-0 systemd[1]: Started libpod-conmon-f869adcbd0b4c506f30937636aa9d56aeb45bcd196eef4e2d98336bc0b1747b7.scope.
Dec 11 09:14:41 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 15 pg[3.0( empty local-lis/les=0/0 n=0 ec=15/15 lis/c=0/0 les/c/f=0/0/0 sis=15) [1] r=0 lpr=15 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 11 09:14:41 compute-0 podman[85263]: 2025-12-11 09:14:41.099573157 +0000 UTC m=+0.025687148 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Dec 11 09:14:41 compute-0 systemd[1]: Started libcrun container.
Dec 11 09:14:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/31191c2d83fc03534e9427bd9e56538337e64e36be1944d2d972c5c0b16a40df/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 11 09:14:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/31191c2d83fc03534e9427bd9e56538337e64e36be1944d2d972c5c0b16a40df/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec 11 09:14:41 compute-0 podman[85263]: 2025-12-11 09:14:41.248101718 +0000 UTC m=+0.174215699 container init f869adcbd0b4c506f30937636aa9d56aeb45bcd196eef4e2d98336bc0b1747b7 (image=quay.io/ceph/ceph:v19, name=nifty_hofstadter, CEPH_REF=squid, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 11 09:14:41 compute-0 podman[85263]: 2025-12-11 09:14:41.254186488 +0000 UTC m=+0.180300449 container start f869adcbd0b4c506f30937636aa9d56aeb45bcd196eef4e2d98336bc0b1747b7 (image=quay.io/ceph/ceph:v19, name=nifty_hofstadter, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 11 09:14:41 compute-0 podman[85263]: 2025-12-11 09:14:41.359444019 +0000 UTC m=+0.285557980 container attach f869adcbd0b4c506f30937636aa9d56aeb45bcd196eef4e2d98336bc0b1747b7 (image=quay.io/ceph/ceph:v19, name=nifty_hofstadter, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS)
Dec 11 09:14:41 compute-0 ceph-mon[74426]: mon.compute-0@0(leader).osd e15 do_prune osdmap full prune enabled
Dec 11 09:14:41 compute-0 ceph-mon[74426]: from='client.? 192.168.122.100:0/2379359262' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "volumes", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Dec 11 09:14:41 compute-0 ceph-mon[74426]: osdmap e15: 2 total, 2 up, 2 in
Dec 11 09:14:41 compute-0 ceph-mon[74426]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool create", "pool": "backups", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} v 0)
Dec 11 09:14:41 compute-0 ceph-mon[74426]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1863808059' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "backups", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Dec 11 09:14:41 compute-0 ceph-mon[74426]: mon.compute-0@0(leader).osd e16 e16: 2 total, 2 up, 2 in
Dec 11 09:14:41 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 16 pg[3.0( empty local-lis/les=15/16 n=0 ec=15/15 lis/c=0/0 les/c/f=0/0/0 sis=15) [1] r=0 lpr=15 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 11 09:14:41 compute-0 ceph-mon[74426]: log_channel(cluster) log [DBG] : osdmap e16: 2 total, 2 up, 2 in
Dec 11 09:14:42 compute-0 ceph-mon[74426]: pgmap v62: 3 pgs: 2 unknown, 1 active+clean; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Dec 11 09:14:42 compute-0 ceph-mon[74426]: from='client.? 192.168.122.100:0/1863808059' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "backups", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Dec 11 09:14:42 compute-0 ceph-mon[74426]: osdmap e16: 2 total, 2 up, 2 in
Dec 11 09:14:42 compute-0 ceph-mon[74426]: mon.compute-0@0(leader).osd e16 do_prune osdmap full prune enabled
Dec 11 09:14:42 compute-0 ceph-mon[74426]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1863808059' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "backups", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Dec 11 09:14:42 compute-0 ceph-mon[74426]: mon.compute-0@0(leader).osd e17 e17: 2 total, 2 up, 2 in
Dec 11 09:14:42 compute-0 nifty_hofstadter[85278]: pool 'backups' created
Dec 11 09:14:42 compute-0 ceph-mon[74426]: log_channel(cluster) log [DBG] : osdmap e17: 2 total, 2 up, 2 in
Dec 11 09:14:42 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 17 pg[4.0( empty local-lis/les=0/0 n=0 ec=17/17 lis/c=0/0 les/c/f=0/0/0 sis=17) [1] r=0 lpr=17 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 11 09:14:42 compute-0 systemd[1]: libpod-f869adcbd0b4c506f30937636aa9d56aeb45bcd196eef4e2d98336bc0b1747b7.scope: Deactivated successfully.
Dec 11 09:14:42 compute-0 podman[85263]: 2025-12-11 09:14:42.67944296 +0000 UTC m=+1.605556951 container died f869adcbd0b4c506f30937636aa9d56aeb45bcd196eef4e2d98336bc0b1747b7 (image=quay.io/ceph/ceph:v19, name=nifty_hofstadter, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 11 09:14:42 compute-0 ceph-mgr[74715]: log_channel(cluster) log [DBG] : pgmap v65: 4 pgs: 2 active+clean, 2 unknown; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Dec 11 09:14:42 compute-0 systemd[1]: var-lib-containers-storage-overlay-31191c2d83fc03534e9427bd9e56538337e64e36be1944d2d972c5c0b16a40df-merged.mount: Deactivated successfully.
Dec 11 09:14:42 compute-0 podman[85263]: 2025-12-11 09:14:42.805649776 +0000 UTC m=+1.731763737 container remove f869adcbd0b4c506f30937636aa9d56aeb45bcd196eef4e2d98336bc0b1747b7 (image=quay.io/ceph/ceph:v19, name=nifty_hofstadter, io.buildah.version=1.40.1, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 11 09:14:42 compute-0 systemd[1]: libpod-conmon-f869adcbd0b4c506f30937636aa9d56aeb45bcd196eef4e2d98336bc0b1747b7.scope: Deactivated successfully.
Dec 11 09:14:42 compute-0 sudo[85260]: pam_unix(sudo:session): session closed for user root
Dec 11 09:14:42 compute-0 sudo[85341]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-deiochspeqdyiuzroaulcohawjkxmhxw ; /usr/bin/python3'
Dec 11 09:14:42 compute-0 sudo[85341]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 11 09:14:43 compute-0 python3[85343]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool create images  replicated_rule --autoscale-mode on _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 11 09:14:43 compute-0 podman[85344]: 2025-12-11 09:14:43.175812159 +0000 UTC m=+0.045335521 container create c38fefdcd382b1e317a017eb4e5f35aa2a98445d3295a15fee18c5021d440795 (image=quay.io/ceph/ceph:v19, name=pensive_benz, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 11 09:14:43 compute-0 podman[85344]: 2025-12-11 09:14:43.154146787 +0000 UTC m=+0.023670169 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Dec 11 09:14:43 compute-0 systemd[1]: Started libpod-conmon-c38fefdcd382b1e317a017eb4e5f35aa2a98445d3295a15fee18c5021d440795.scope.
Dec 11 09:14:43 compute-0 systemd[1]: Started libcrun container.
Dec 11 09:14:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5df143b008700c5e0e6fd54a4932df666c999522180836fe5a65be2c31d9b4ed/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec 11 09:14:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5df143b008700c5e0e6fd54a4932df666c999522180836fe5a65be2c31d9b4ed/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 11 09:14:43 compute-0 podman[85344]: 2025-12-11 09:14:43.434675382 +0000 UTC m=+0.304198774 container init c38fefdcd382b1e317a017eb4e5f35aa2a98445d3295a15fee18c5021d440795 (image=quay.io/ceph/ceph:v19, name=pensive_benz, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Dec 11 09:14:43 compute-0 podman[85344]: 2025-12-11 09:14:43.442949577 +0000 UTC m=+0.312472939 container start c38fefdcd382b1e317a017eb4e5f35aa2a98445d3295a15fee18c5021d440795 (image=quay.io/ceph/ceph:v19, name=pensive_benz, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Dec 11 09:14:43 compute-0 podman[85344]: 2025-12-11 09:14:43.472805811 +0000 UTC m=+0.342329213 container attach c38fefdcd382b1e317a017eb4e5f35aa2a98445d3295a15fee18c5021d440795 (image=quay.io/ceph/ceph:v19, name=pensive_benz, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, ceph=True, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.vendor=CentOS)
Dec 11 09:14:43 compute-0 ceph-mon[74426]: mon.compute-0@0(leader).osd e17 do_prune osdmap full prune enabled
Dec 11 09:14:43 compute-0 ceph-mon[74426]: mon.compute-0@0(leader).osd e18 e18: 2 total, 2 up, 2 in
Dec 11 09:14:43 compute-0 ceph-mon[74426]: from='client.? 192.168.122.100:0/1863808059' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "backups", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Dec 11 09:14:43 compute-0 ceph-mon[74426]: osdmap e17: 2 total, 2 up, 2 in
Dec 11 09:14:43 compute-0 ceph-mon[74426]: pgmap v65: 4 pgs: 2 active+clean, 2 unknown; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Dec 11 09:14:43 compute-0 ceph-mon[74426]: log_channel(cluster) log [DBG] : osdmap e18: 2 total, 2 up, 2 in
Dec 11 09:14:43 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 18 pg[4.0( empty local-lis/les=17/18 n=0 ec=17/17 lis/c=0/0 les/c/f=0/0/0 sis=17) [1] r=0 lpr=17 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 11 09:14:43 compute-0 ceph-mon[74426]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool create", "pool": "images", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} v 0)
Dec 11 09:14:43 compute-0 ceph-mon[74426]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/321715169' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "images", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Dec 11 09:14:43 compute-0 ceph-mon[74426]: mon.compute-0@0(leader).osd e18 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 11 09:14:44 compute-0 ceph-mon[74426]: mon.compute-0@0(leader).osd e18 do_prune osdmap full prune enabled
Dec 11 09:14:44 compute-0 ceph-mon[74426]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/321715169' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "images", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Dec 11 09:14:44 compute-0 ceph-mon[74426]: mon.compute-0@0(leader).osd e19 e19: 2 total, 2 up, 2 in
Dec 11 09:14:44 compute-0 pensive_benz[85360]: pool 'images' created
Dec 11 09:14:44 compute-0 ceph-mon[74426]: log_channel(cluster) log [DBG] : osdmap e19: 2 total, 2 up, 2 in
Dec 11 09:14:44 compute-0 ceph-mgr[74715]: log_channel(cluster) log [DBG] : pgmap v68: 5 pgs: 1 unknown, 4 active+clean; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Dec 11 09:14:44 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 19 pg[5.0( empty local-lis/les=0/0 n=0 ec=19/19 lis/c=0/0 les/c/f=0/0/0 sis=19) [1] r=0 lpr=19 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 11 09:14:44 compute-0 ceph-mon[74426]: osdmap e18: 2 total, 2 up, 2 in
Dec 11 09:14:44 compute-0 ceph-mon[74426]: from='client.? 192.168.122.100:0/321715169' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "images", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Dec 11 09:14:44 compute-0 systemd[1]: libpod-c38fefdcd382b1e317a017eb4e5f35aa2a98445d3295a15fee18c5021d440795.scope: Deactivated successfully.
Dec 11 09:14:44 compute-0 podman[85344]: 2025-12-11 09:14:44.776695297 +0000 UTC m=+1.646218659 container died c38fefdcd382b1e317a017eb4e5f35aa2a98445d3295a15fee18c5021d440795 (image=quay.io/ceph/ceph:v19, name=pensive_benz, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Dec 11 09:14:44 compute-0 systemd[1]: var-lib-containers-storage-overlay-5df143b008700c5e0e6fd54a4932df666c999522180836fe5a65be2c31d9b4ed-merged.mount: Deactivated successfully.
Dec 11 09:14:44 compute-0 podman[85344]: 2025-12-11 09:14:44.818825185 +0000 UTC m=+1.688348547 container remove c38fefdcd382b1e317a017eb4e5f35aa2a98445d3295a15fee18c5021d440795 (image=quay.io/ceph/ceph:v19, name=pensive_benz, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0)
Dec 11 09:14:44 compute-0 sudo[85341]: pam_unix(sudo:session): session closed for user root
Dec 11 09:14:44 compute-0 systemd[1]: libpod-conmon-c38fefdcd382b1e317a017eb4e5f35aa2a98445d3295a15fee18c5021d440795.scope: Deactivated successfully.
Dec 11 09:14:45 compute-0 sudo[85423]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qzxvmhbhwafbagewlvpyhpepeyegxlwo ; /usr/bin/python3'
Dec 11 09:14:45 compute-0 sudo[85423]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 11 09:14:45 compute-0 python3[85425]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool create cephfs.cephfs.meta  replicated_rule --autoscale-mode on _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 11 09:14:45 compute-0 podman[85426]: 2025-12-11 09:14:45.278097618 +0000 UTC m=+0.044203045 container create ee4a59d577ac1c5e45a82128463f9c8bcc8a8a860ada25abb4c679c7f6a1f2d8 (image=quay.io/ceph/ceph:v19, name=zen_murdock, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, CEPH_REF=squid, ceph=True, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec 11 09:14:45 compute-0 systemd[1]: Started libpod-conmon-ee4a59d577ac1c5e45a82128463f9c8bcc8a8a860ada25abb4c679c7f6a1f2d8.scope.
Dec 11 09:14:45 compute-0 systemd[1]: Started libcrun container.
Dec 11 09:14:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/56989ad48f0362463ca9965d09fcb0a240753854e4a503cad947c861b7aabda5/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 11 09:14:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/56989ad48f0362463ca9965d09fcb0a240753854e4a503cad947c861b7aabda5/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec 11 09:14:45 compute-0 podman[85426]: 2025-12-11 09:14:45.256903391 +0000 UTC m=+0.023008838 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Dec 11 09:14:45 compute-0 podman[85426]: 2025-12-11 09:14:45.358589303 +0000 UTC m=+0.124694750 container init ee4a59d577ac1c5e45a82128463f9c8bcc8a8a860ada25abb4c679c7f6a1f2d8 (image=quay.io/ceph/ceph:v19, name=zen_murdock, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec 11 09:14:45 compute-0 podman[85426]: 2025-12-11 09:14:45.364448871 +0000 UTC m=+0.130554298 container start ee4a59d577ac1c5e45a82128463f9c8bcc8a8a860ada25abb4c679c7f6a1f2d8 (image=quay.io/ceph/ceph:v19, name=zen_murdock, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Dec 11 09:14:45 compute-0 podman[85426]: 2025-12-11 09:14:45.367928342 +0000 UTC m=+0.134033769 container attach ee4a59d577ac1c5e45a82128463f9c8bcc8a8a860ada25abb4c679c7f6a1f2d8 (image=quay.io/ceph/ceph:v19, name=zen_murdock, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 11 09:14:45 compute-0 ceph-mon[74426]: log_channel(cluster) log [WRN] : Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Dec 11 09:14:45 compute-0 ceph-mon[74426]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool create", "pool": "cephfs.cephfs.meta", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} v 0)
Dec 11 09:14:45 compute-0 ceph-mon[74426]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/4077184896' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "cephfs.cephfs.meta", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Dec 11 09:14:45 compute-0 ceph-mon[74426]: mon.compute-0@0(leader).osd e19 do_prune osdmap full prune enabled
Dec 11 09:14:45 compute-0 ceph-mon[74426]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/4077184896' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "cephfs.cephfs.meta", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Dec 11 09:14:45 compute-0 ceph-mon[74426]: mon.compute-0@0(leader).osd e20 e20: 2 total, 2 up, 2 in
Dec 11 09:14:45 compute-0 zen_murdock[85441]: pool 'cephfs.cephfs.meta' created
Dec 11 09:14:45 compute-0 ceph-mon[74426]: log_channel(cluster) log [DBG] : osdmap e20: 2 total, 2 up, 2 in
Dec 11 09:14:45 compute-0 ceph-mon[74426]: from='client.? 192.168.122.100:0/321715169' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "images", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Dec 11 09:14:45 compute-0 ceph-mon[74426]: osdmap e19: 2 total, 2 up, 2 in
Dec 11 09:14:45 compute-0 ceph-mon[74426]: pgmap v68: 5 pgs: 1 unknown, 4 active+clean; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Dec 11 09:14:45 compute-0 ceph-mon[74426]: Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Dec 11 09:14:45 compute-0 ceph-mon[74426]: from='client.? 192.168.122.100:0/4077184896' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "cephfs.cephfs.meta", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Dec 11 09:14:45 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 20 pg[6.0( empty local-lis/les=0/0 n=0 ec=20/20 lis/c=0/0 les/c/f=0/0/0 sis=20) [1] r=0 lpr=20 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 11 09:14:45 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 20 pg[5.0( empty local-lis/les=19/20 n=0 ec=19/19 lis/c=0/0 les/c/f=0/0/0 sis=19) [1] r=0 lpr=19 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 11 09:14:45 compute-0 systemd[1]: libpod-ee4a59d577ac1c5e45a82128463f9c8bcc8a8a860ada25abb4c679c7f6a1f2d8.scope: Deactivated successfully.
Dec 11 09:14:45 compute-0 podman[85426]: 2025-12-11 09:14:45.798985023 +0000 UTC m=+0.565090450 container died ee4a59d577ac1c5e45a82128463f9c8bcc8a8a860ada25abb4c679c7f6a1f2d8 (image=quay.io/ceph/ceph:v19, name=zen_murdock, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 11 09:14:45 compute-0 systemd[1]: var-lib-containers-storage-overlay-56989ad48f0362463ca9965d09fcb0a240753854e4a503cad947c861b7aabda5-merged.mount: Deactivated successfully.
Dec 11 09:14:45 compute-0 podman[85426]: 2025-12-11 09:14:45.878517278 +0000 UTC m=+0.644622705 container remove ee4a59d577ac1c5e45a82128463f9c8bcc8a8a860ada25abb4c679c7f6a1f2d8 (image=quay.io/ceph/ceph:v19, name=zen_murdock, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0)
Dec 11 09:14:45 compute-0 systemd[1]: libpod-conmon-ee4a59d577ac1c5e45a82128463f9c8bcc8a8a860ada25abb4c679c7f6a1f2d8.scope: Deactivated successfully.
Dec 11 09:14:45 compute-0 sudo[85423]: pam_unix(sudo:session): session closed for user root
Dec 11 09:14:46 compute-0 sudo[85503]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bsvjefqesfiqumbmdjhsrigfxhovntdt ; /usr/bin/python3'
Dec 11 09:14:46 compute-0 sudo[85503]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 11 09:14:46 compute-0 python3[85505]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool create cephfs.cephfs.data  replicated_rule --autoscale-mode on _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 11 09:14:46 compute-0 podman[85506]: 2025-12-11 09:14:46.25927222 +0000 UTC m=+0.043998329 container create e91dd550a7d1aa1d759977f19b66bdea32bb87057dd65a1523fa64e3105e3dee (image=quay.io/ceph/ceph:v19, name=nervous_mcnulty, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 11 09:14:46 compute-0 systemd[1]: Started libpod-conmon-e91dd550a7d1aa1d759977f19b66bdea32bb87057dd65a1523fa64e3105e3dee.scope.
Dec 11 09:14:46 compute-0 systemd[1]: Started libcrun container.
Dec 11 09:14:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a429d4dfecea25104ed651fe87af12f1912bbabd20069ba5df2a2391d97ec39a/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 11 09:14:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a429d4dfecea25104ed651fe87af12f1912bbabd20069ba5df2a2391d97ec39a/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec 11 09:14:46 compute-0 podman[85506]: 2025-12-11 09:14:46.239370823 +0000 UTC m=+0.024096942 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Dec 11 09:14:46 compute-0 podman[85506]: 2025-12-11 09:14:46.335268822 +0000 UTC m=+0.119994931 container init e91dd550a7d1aa1d759977f19b66bdea32bb87057dd65a1523fa64e3105e3dee (image=quay.io/ceph/ceph:v19, name=nervous_mcnulty, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, CEPH_REF=squid)
Dec 11 09:14:46 compute-0 podman[85506]: 2025-12-11 09:14:46.34117018 +0000 UTC m=+0.125896289 container start e91dd550a7d1aa1d759977f19b66bdea32bb87057dd65a1523fa64e3105e3dee (image=quay.io/ceph/ceph:v19, name=nervous_mcnulty, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=squid, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 11 09:14:46 compute-0 podman[85506]: 2025-12-11 09:14:46.344569358 +0000 UTC m=+0.129295467 container attach e91dd550a7d1aa1d759977f19b66bdea32bb87057dd65a1523fa64e3105e3dee (image=quay.io/ceph/ceph:v19, name=nervous_mcnulty, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325)
Dec 11 09:14:46 compute-0 ceph-mon[74426]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool create", "pool": "cephfs.cephfs.data", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} v 0)
Dec 11 09:14:46 compute-0 ceph-mon[74426]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/949722423' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "cephfs.cephfs.data", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Dec 11 09:14:46 compute-0 ceph-mgr[74715]: log_channel(cluster) log [DBG] : pgmap v70: 6 pgs: 2 unknown, 4 active+clean; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Dec 11 09:14:46 compute-0 ceph-mon[74426]: mon.compute-0@0(leader).osd e20 do_prune osdmap full prune enabled
Dec 11 09:14:46 compute-0 ceph-mon[74426]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/949722423' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "cephfs.cephfs.data", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Dec 11 09:14:46 compute-0 ceph-mon[74426]: mon.compute-0@0(leader).osd e21 e21: 2 total, 2 up, 2 in
Dec 11 09:14:46 compute-0 nervous_mcnulty[85521]: pool 'cephfs.cephfs.data' created
Dec 11 09:14:46 compute-0 ceph-mon[74426]: log_channel(cluster) log [DBG] : osdmap e21: 2 total, 2 up, 2 in
Dec 11 09:14:46 compute-0 ceph-mon[74426]: from='client.? 192.168.122.100:0/4077184896' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "cephfs.cephfs.meta", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Dec 11 09:14:46 compute-0 ceph-mon[74426]: osdmap e20: 2 total, 2 up, 2 in
Dec 11 09:14:46 compute-0 ceph-mon[74426]: from='client.? 192.168.122.100:0/949722423' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "cephfs.cephfs.data", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Dec 11 09:14:46 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 21 pg[6.0( empty local-lis/les=20/21 n=0 ec=20/20 lis/c=0/0 les/c/f=0/0/0 sis=20) [1] r=0 lpr=20 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 11 09:14:46 compute-0 systemd[1]: libpod-e91dd550a7d1aa1d759977f19b66bdea32bb87057dd65a1523fa64e3105e3dee.scope: Deactivated successfully.
Dec 11 09:14:46 compute-0 podman[85548]: 2025-12-11 09:14:46.851117014 +0000 UTC m=+0.029387531 container died e91dd550a7d1aa1d759977f19b66bdea32bb87057dd65a1523fa64e3105e3dee (image=quay.io/ceph/ceph:v19, name=nervous_mcnulty, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250325, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 11 09:14:46 compute-0 systemd[1]: var-lib-containers-storage-overlay-a429d4dfecea25104ed651fe87af12f1912bbabd20069ba5df2a2391d97ec39a-merged.mount: Deactivated successfully.
Dec 11 09:14:46 compute-0 podman[85548]: 2025-12-11 09:14:46.892149058 +0000 UTC m=+0.070419544 container remove e91dd550a7d1aa1d759977f19b66bdea32bb87057dd65a1523fa64e3105e3dee (image=quay.io/ceph/ceph:v19, name=nervous_mcnulty, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.license=GPLv2)
Dec 11 09:14:46 compute-0 systemd[1]: libpod-conmon-e91dd550a7d1aa1d759977f19b66bdea32bb87057dd65a1523fa64e3105e3dee.scope: Deactivated successfully.
Dec 11 09:14:46 compute-0 sudo[85503]: pam_unix(sudo:session): session closed for user root
Dec 11 09:14:47 compute-0 sudo[85586]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ndbgfhxttpexdmguorrzkjssfgjyrujh ; /usr/bin/python3'
Dec 11 09:14:47 compute-0 sudo[85586]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 11 09:14:47 compute-0 python3[85588]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool application enable vms rbd _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 11 09:14:47 compute-0 podman[85589]: 2025-12-11 09:14:47.320475042 +0000 UTC m=+0.053484613 container create ece7e277dd0696f4c08819260d290befc71e42bf31022a7d83a1af9b7ac9bdb8 (image=quay.io/ceph/ceph:v19, name=tender_fermat, org.label-schema.license=GPLv2, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 11 09:14:47 compute-0 systemd[1]: Started libpod-conmon-ece7e277dd0696f4c08819260d290befc71e42bf31022a7d83a1af9b7ac9bdb8.scope.
Dec 11 09:14:47 compute-0 systemd[1]: Started libcrun container.
Dec 11 09:14:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9b6d2f1d261d683b358a1cb5c6c4d43c6107aed8f7855e83c69db8123835b301/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec 11 09:14:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9b6d2f1d261d683b358a1cb5c6c4d43c6107aed8f7855e83c69db8123835b301/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 11 09:14:47 compute-0 podman[85589]: 2025-12-11 09:14:47.299668045 +0000 UTC m=+0.032677626 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Dec 11 09:14:47 compute-0 podman[85589]: 2025-12-11 09:14:47.407706642 +0000 UTC m=+0.140716253 container init ece7e277dd0696f4c08819260d290befc71e42bf31022a7d83a1af9b7ac9bdb8 (image=quay.io/ceph/ceph:v19, name=tender_fermat, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default)
Dec 11 09:14:47 compute-0 podman[85589]: 2025-12-11 09:14:47.414182309 +0000 UTC m=+0.147191870 container start ece7e277dd0696f4c08819260d290befc71e42bf31022a7d83a1af9b7ac9bdb8 (image=quay.io/ceph/ceph:v19, name=tender_fermat, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 11 09:14:47 compute-0 podman[85589]: 2025-12-11 09:14:47.41796448 +0000 UTC m=+0.150974041 container attach ece7e277dd0696f4c08819260d290befc71e42bf31022a7d83a1af9b7ac9bdb8 (image=quay.io/ceph/ceph:v19, name=tender_fermat, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325)
Dec 11 09:14:47 compute-0 ceph-mon[74426]: mon.compute-0@0(leader).osd e21 do_prune osdmap full prune enabled
Dec 11 09:14:47 compute-0 ceph-mon[74426]: pgmap v70: 6 pgs: 2 unknown, 4 active+clean; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Dec 11 09:14:47 compute-0 ceph-mon[74426]: from='client.? 192.168.122.100:0/949722423' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "cephfs.cephfs.data", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Dec 11 09:14:47 compute-0 ceph-mon[74426]: osdmap e21: 2 total, 2 up, 2 in
Dec 11 09:14:47 compute-0 ceph-mon[74426]: mon.compute-0@0(leader).osd e22 e22: 2 total, 2 up, 2 in
Dec 11 09:14:47 compute-0 ceph-mon[74426]: log_channel(cluster) log [DBG] : osdmap e22: 2 total, 2 up, 2 in
Dec 11 09:14:47 compute-0 ceph-mon[74426]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool application enable", "pool": "vms", "app": "rbd"} v 0)
Dec 11 09:14:47 compute-0 ceph-mon[74426]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/777583152' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "vms", "app": "rbd"}]: dispatch
Dec 11 09:14:48 compute-0 ceph-mgr[74715]: log_channel(cluster) log [DBG] : pgmap v73: 7 pgs: 1 creating+peering, 2 unknown, 4 active+clean; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Dec 11 09:14:48 compute-0 ceph-mon[74426]: mon.compute-0@0(leader).osd e22 do_prune osdmap full prune enabled
Dec 11 09:14:48 compute-0 ceph-mon[74426]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/777583152' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "vms", "app": "rbd"}]': finished
Dec 11 09:14:48 compute-0 ceph-mon[74426]: mon.compute-0@0(leader).osd e23 e23: 2 total, 2 up, 2 in
Dec 11 09:14:48 compute-0 tender_fermat[85604]: enabled application 'rbd' on pool 'vms'
Dec 11 09:14:48 compute-0 ceph-mon[74426]: log_channel(cluster) log [DBG] : osdmap e23: 2 total, 2 up, 2 in
Dec 11 09:14:48 compute-0 ceph-mon[74426]: osdmap e22: 2 total, 2 up, 2 in
Dec 11 09:14:48 compute-0 ceph-mon[74426]: from='client.? 192.168.122.100:0/777583152' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "vms", "app": "rbd"}]: dispatch
Dec 11 09:14:48 compute-0 systemd[1]: libpod-ece7e277dd0696f4c08819260d290befc71e42bf31022a7d83a1af9b7ac9bdb8.scope: Deactivated successfully.
Dec 11 09:14:48 compute-0 podman[85589]: 2025-12-11 09:14:48.855283494 +0000 UTC m=+1.588293055 container died ece7e277dd0696f4c08819260d290befc71e42bf31022a7d83a1af9b7ac9bdb8 (image=quay.io/ceph/ceph:v19, name=tender_fermat, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec 11 09:14:48 compute-0 ceph-mon[74426]: mon.compute-0@0(leader).osd e23 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 11 09:14:49 compute-0 systemd[1]: var-lib-containers-storage-overlay-9b6d2f1d261d683b358a1cb5c6c4d43c6107aed8f7855e83c69db8123835b301-merged.mount: Deactivated successfully.
Dec 11 09:14:49 compute-0 podman[85589]: 2025-12-11 09:14:49.054427326 +0000 UTC m=+1.787436887 container remove ece7e277dd0696f4c08819260d290befc71e42bf31022a7d83a1af9b7ac9bdb8 (image=quay.io/ceph/ceph:v19, name=tender_fermat, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 11 09:14:49 compute-0 systemd[1]: libpod-conmon-ece7e277dd0696f4c08819260d290befc71e42bf31022a7d83a1af9b7ac9bdb8.scope: Deactivated successfully.
Dec 11 09:14:49 compute-0 sudo[85586]: pam_unix(sudo:session): session closed for user root
Dec 11 09:14:49 compute-0 sudo[85666]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uikbjwoamyxjjwqtzsfrkphdwzgvorai ; /usr/bin/python3'
Dec 11 09:14:49 compute-0 sudo[85666]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 11 09:14:49 compute-0 python3[85668]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool application enable volumes rbd _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 11 09:14:49 compute-0 podman[85669]: 2025-12-11 09:14:49.457432499 +0000 UTC m=+0.106966583 container create 77ceecc473482e2b490f61bd5c5d0020411a5dd63035f220a76fde0dc58d40e6 (image=quay.io/ceph/ceph:v19, name=zealous_moser, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Dec 11 09:14:49 compute-0 podman[85669]: 2025-12-11 09:14:49.373824834 +0000 UTC m=+0.023358948 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Dec 11 09:14:49 compute-0 systemd[1]: Started libpod-conmon-77ceecc473482e2b490f61bd5c5d0020411a5dd63035f220a76fde0dc58d40e6.scope.
Dec 11 09:14:49 compute-0 systemd[1]: Started libcrun container.
Dec 11 09:14:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9a9c95b3f37c6af4b11a7e7114f1551d66800f65c9035012f995e296188f44c9/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 11 09:14:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9a9c95b3f37c6af4b11a7e7114f1551d66800f65c9035012f995e296188f44c9/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec 11 09:14:49 compute-0 podman[85669]: 2025-12-11 09:14:49.60249021 +0000 UTC m=+0.252024324 container init 77ceecc473482e2b490f61bd5c5d0020411a5dd63035f220a76fde0dc58d40e6 (image=quay.io/ceph/ceph:v19, name=zealous_moser, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 11 09:14:49 compute-0 podman[85669]: 2025-12-11 09:14:49.609163534 +0000 UTC m=+0.258697618 container start 77ceecc473482e2b490f61bd5c5d0020411a5dd63035f220a76fde0dc58d40e6 (image=quay.io/ceph/ceph:v19, name=zealous_moser, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.40.1, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325)
Dec 11 09:14:49 compute-0 podman[85669]: 2025-12-11 09:14:49.613188663 +0000 UTC m=+0.262722777 container attach 77ceecc473482e2b490f61bd5c5d0020411a5dd63035f220a76fde0dc58d40e6 (image=quay.io/ceph/ceph:v19, name=zealous_moser, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325)
Dec 11 09:14:49 compute-0 ceph-mon[74426]: pgmap v73: 7 pgs: 1 creating+peering, 2 unknown, 4 active+clean; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Dec 11 09:14:49 compute-0 ceph-mon[74426]: from='client.? 192.168.122.100:0/777583152' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "vms", "app": "rbd"}]': finished
Dec 11 09:14:49 compute-0 ceph-mon[74426]: osdmap e23: 2 total, 2 up, 2 in
Dec 11 09:14:49 compute-0 ceph-mon[74426]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool application enable", "pool": "volumes", "app": "rbd"} v 0)
Dec 11 09:14:49 compute-0 ceph-mon[74426]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/4123948204' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "volumes", "app": "rbd"}]: dispatch
Dec 11 09:14:50 compute-0 ceph-mgr[74715]: log_channel(cluster) log [DBG] : pgmap v75: 7 pgs: 1 creating+peering, 6 active+clean; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Dec 11 09:14:50 compute-0 ceph-mon[74426]: mon.compute-0@0(leader).osd e23 do_prune osdmap full prune enabled
Dec 11 09:14:50 compute-0 ceph-mon[74426]: log_channel(cluster) log [WRN] : Health check update: 5 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Dec 11 09:14:51 compute-0 ceph-mon[74426]: from='client.? 192.168.122.100:0/4123948204' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "volumes", "app": "rbd"}]: dispatch
Dec 11 09:14:51 compute-0 ceph-mon[74426]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/4123948204' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "volumes", "app": "rbd"}]': finished
Dec 11 09:14:51 compute-0 ceph-mon[74426]: mon.compute-0@0(leader).osd e24 e24: 2 total, 2 up, 2 in
Dec 11 09:14:51 compute-0 zealous_moser[85684]: enabled application 'rbd' on pool 'volumes'
Dec 11 09:14:51 compute-0 systemd[1]: libpod-77ceecc473482e2b490f61bd5c5d0020411a5dd63035f220a76fde0dc58d40e6.scope: Deactivated successfully.
Dec 11 09:14:51 compute-0 podman[85669]: 2025-12-11 09:14:51.034830446 +0000 UTC m=+1.684364530 container died 77ceecc473482e2b490f61bd5c5d0020411a5dd63035f220a76fde0dc58d40e6 (image=quay.io/ceph/ceph:v19, name=zealous_moser, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250325)
Dec 11 09:14:51 compute-0 ceph-mgr[74715]: [balancer INFO root] Optimize plan auto_2025-12-11_09:14:51
Dec 11 09:14:51 compute-0 ceph-mgr[74715]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec 11 09:14:51 compute-0 ceph-mgr[74715]: [balancer INFO root] Some PGs (0.142857) are inactive; try again later
Dec 11 09:14:51 compute-0 ceph-mon[74426]: log_channel(cluster) log [DBG] : osdmap e24: 2 total, 2 up, 2 in
Dec 11 09:14:51 compute-0 ceph-mgr[74715]: [pg_autoscaler INFO root] _maybe_adjust
Dec 11 09:14:51 compute-0 ceph-mgr[74715]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Dec 11 09:14:51 compute-0 ceph-mgr[74715]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 1.0778624975581169e-05 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec 11 09:14:51 compute-0 ceph-mgr[74715]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Dec 11 09:14:51 compute-0 ceph-mgr[74715]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 1)
Dec 11 09:14:51 compute-0 ceph-mgr[74715]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Dec 11 09:14:51 compute-0 ceph-mgr[74715]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 1)
Dec 11 09:14:51 compute-0 ceph-mgr[74715]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Dec 11 09:14:51 compute-0 ceph-mgr[74715]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 1)
Dec 11 09:14:51 compute-0 ceph-mgr[74715]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Dec 11 09:14:51 compute-0 ceph-mgr[74715]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 1)
Dec 11 09:14:51 compute-0 ceph-mgr[74715]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Dec 11 09:14:51 compute-0 ceph-mgr[74715]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 1)
Dec 11 09:14:51 compute-0 ceph-mgr[74715]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Dec 11 09:14:51 compute-0 ceph-mgr[74715]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 1)
Dec 11 09:14:51 compute-0 ceph-mon[74426]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "vms", "var": "pg_num", "val": "32"} v 0)
Dec 11 09:14:51 compute-0 ceph-mon[74426]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/1266408288' entity='mgr.compute-0.wwpcae' cmd=[{"prefix": "osd pool set", "pool": "vms", "var": "pg_num", "val": "32"}]: dispatch
Dec 11 09:14:51 compute-0 ceph-mgr[74715]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec 11 09:14:51 compute-0 ceph-mgr[74715]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 11 09:14:51 compute-0 ceph-mgr[74715]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec 11 09:14:51 compute-0 ceph-mgr[74715]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 11 09:14:51 compute-0 ceph-mgr[74715]: [volumes INFO mgr_util] scanning for idle connections..
Dec 11 09:14:51 compute-0 ceph-mgr[74715]: [volumes INFO mgr_util] cleaning up connections: []
Dec 11 09:14:51 compute-0 ceph-mgr[74715]: [volumes INFO mgr_util] scanning for idle connections..
Dec 11 09:14:51 compute-0 ceph-mgr[74715]: [volumes INFO mgr_util] cleaning up connections: []
Dec 11 09:14:51 compute-0 ceph-mgr[74715]: [volumes INFO mgr_util] scanning for idle connections..
Dec 11 09:14:51 compute-0 ceph-mgr[74715]: [volumes INFO mgr_util] cleaning up connections: []
Dec 11 09:14:51 compute-0 systemd[1]: var-lib-containers-storage-overlay-9a9c95b3f37c6af4b11a7e7114f1551d66800f65c9035012f995e296188f44c9-merged.mount: Deactivated successfully.
Dec 11 09:14:51 compute-0 ceph-mgr[74715]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 11 09:14:51 compute-0 ceph-mgr[74715]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 11 09:14:51 compute-0 podman[85669]: 2025-12-11 09:14:51.408947184 +0000 UTC m=+2.058481278 container remove 77ceecc473482e2b490f61bd5c5d0020411a5dd63035f220a76fde0dc58d40e6 (image=quay.io/ceph/ceph:v19, name=zealous_moser, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid)
Dec 11 09:14:51 compute-0 systemd[1]: libpod-conmon-77ceecc473482e2b490f61bd5c5d0020411a5dd63035f220a76fde0dc58d40e6.scope: Deactivated successfully.
Dec 11 09:14:51 compute-0 sudo[85666]: pam_unix(sudo:session): session closed for user root
Dec 11 09:14:51 compute-0 sudo[85743]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-iwnslscnoxzzuhmqamokdomvvmqurptc ; /usr/bin/python3'
Dec 11 09:14:51 compute-0 sudo[85743]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 11 09:14:51 compute-0 python3[85745]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool application enable backups rbd _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 11 09:14:51 compute-0 podman[85746]: 2025-12-11 09:14:51.761430691 +0000 UTC m=+0.044851195 container create f685908e0845c233ef7fda00211fe43cc3367587a5d95d114ebc14529ef38343 (image=quay.io/ceph/ceph:v19, name=sad_engelbart, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Dec 11 09:14:51 compute-0 systemd[1]: Started libpod-conmon-f685908e0845c233ef7fda00211fe43cc3367587a5d95d114ebc14529ef38343.scope.
Dec 11 09:14:51 compute-0 systemd[1]: Started libcrun container.
Dec 11 09:14:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/23c9fa6a799a7d07f527f8b9ca43599785baf4d2d2f3ebbf5dc960ccf612ed7f/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec 11 09:14:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/23c9fa6a799a7d07f527f8b9ca43599785baf4d2d2f3ebbf5dc960ccf612ed7f/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 11 09:14:51 compute-0 podman[85746]: 2025-12-11 09:14:51.833162937 +0000 UTC m=+0.116583481 container init f685908e0845c233ef7fda00211fe43cc3367587a5d95d114ebc14529ef38343 (image=quay.io/ceph/ceph:v19, name=sad_engelbart, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.40.1, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325)
Dec 11 09:14:51 compute-0 podman[85746]: 2025-12-11 09:14:51.744932344 +0000 UTC m=+0.028352868 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Dec 11 09:14:51 compute-0 podman[85746]: 2025-12-11 09:14:51.840896424 +0000 UTC m=+0.124316928 container start f685908e0845c233ef7fda00211fe43cc3367587a5d95d114ebc14529ef38343 (image=quay.io/ceph/ceph:v19, name=sad_engelbart, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 11 09:14:51 compute-0 podman[85746]: 2025-12-11 09:14:51.844836491 +0000 UTC m=+0.128256995 container attach f685908e0845c233ef7fda00211fe43cc3367587a5d95d114ebc14529ef38343 (image=quay.io/ceph/ceph:v19, name=sad_engelbart, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Dec 11 09:14:52 compute-0 ceph-mon[74426]: pgmap v75: 7 pgs: 1 creating+peering, 6 active+clean; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Dec 11 09:14:52 compute-0 ceph-mon[74426]: Health check update: 5 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Dec 11 09:14:52 compute-0 ceph-mon[74426]: from='client.? 192.168.122.100:0/4123948204' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "volumes", "app": "rbd"}]': finished
Dec 11 09:14:52 compute-0 ceph-mon[74426]: osdmap e24: 2 total, 2 up, 2 in
Dec 11 09:14:52 compute-0 ceph-mon[74426]: from='mgr.14124 192.168.122.100:0/1266408288' entity='mgr.compute-0.wwpcae' cmd=[{"prefix": "osd pool set", "pool": "vms", "var": "pg_num", "val": "32"}]: dispatch
Dec 11 09:14:52 compute-0 ceph-mon[74426]: mon.compute-0@0(leader).osd e24 do_prune osdmap full prune enabled
Dec 11 09:14:52 compute-0 ceph-mon[74426]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/1266408288' entity='mgr.compute-0.wwpcae' cmd='[{"prefix": "osd pool set", "pool": "vms", "var": "pg_num", "val": "32"}]': finished
Dec 11 09:14:52 compute-0 ceph-mon[74426]: mon.compute-0@0(leader).osd e25 e25: 2 total, 2 up, 2 in
Dec 11 09:14:52 compute-0 ceph-mon[74426]: log_channel(cluster) log [DBG] : osdmap e25: 2 total, 2 up, 2 in
Dec 11 09:14:52 compute-0 ceph-mgr[74715]: [progress INFO root] update: starting ev a1f2958f-9ca3-4105-94ba-adaed98a6fef (PG autoscaler increasing pool 2 PGs from 1 to 32)
Dec 11 09:14:52 compute-0 ceph-mon[74426]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "volumes", "var": "pg_num", "val": "32"} v 0)
Dec 11 09:14:52 compute-0 ceph-mon[74426]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/1266408288' entity='mgr.compute-0.wwpcae' cmd=[{"prefix": "osd pool set", "pool": "volumes", "var": "pg_num", "val": "32"}]: dispatch
Dec 11 09:14:52 compute-0 ceph-mon[74426]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool application enable", "pool": "backups", "app": "rbd"} v 0)
Dec 11 09:14:52 compute-0 ceph-mon[74426]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1917506725' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "backups", "app": "rbd"}]: dispatch
Dec 11 09:14:52 compute-0 ceph-mgr[74715]: log_channel(cluster) log [DBG] : pgmap v78: 7 pgs: 7 active+clean; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Dec 11 09:14:52 compute-0 ceph-mon[74426]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "vms", "var": "pg_num_actual", "val": "32"} v 0)
Dec 11 09:14:52 compute-0 ceph-mon[74426]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/1266408288' entity='mgr.compute-0.wwpcae' cmd=[{"prefix": "osd pool set", "pool": "vms", "var": "pg_num_actual", "val": "32"}]: dispatch
Dec 11 09:14:53 compute-0 ceph-mon[74426]: mon.compute-0@0(leader).osd e25 do_prune osdmap full prune enabled
Dec 11 09:14:53 compute-0 ceph-mon[74426]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/1266408288' entity='mgr.compute-0.wwpcae' cmd='[{"prefix": "osd pool set", "pool": "volumes", "var": "pg_num", "val": "32"}]': finished
Dec 11 09:14:53 compute-0 ceph-mon[74426]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1917506725' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "backups", "app": "rbd"}]': finished
Dec 11 09:14:53 compute-0 ceph-mon[74426]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/1266408288' entity='mgr.compute-0.wwpcae' cmd='[{"prefix": "osd pool set", "pool": "vms", "var": "pg_num_actual", "val": "32"}]': finished
Dec 11 09:14:53 compute-0 ceph-mon[74426]: mon.compute-0@0(leader).osd e26 e26: 2 total, 2 up, 2 in
Dec 11 09:14:53 compute-0 sad_engelbart[85762]: enabled application 'rbd' on pool 'backups'
Dec 11 09:14:53 compute-0 ceph-mon[74426]: log_channel(cluster) log [DBG] : osdmap e26: 2 total, 2 up, 2 in
Dec 11 09:14:53 compute-0 ceph-mgr[74715]: [progress INFO root] update: starting ev bec620ca-26b5-483e-9fec-2794c9a9fc78 (PG autoscaler increasing pool 3 PGs from 1 to 32)
Dec 11 09:14:53 compute-0 ceph-mon[74426]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "backups", "var": "pg_num", "val": "32"} v 0)
Dec 11 09:14:53 compute-0 ceph-mon[74426]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/1266408288' entity='mgr.compute-0.wwpcae' cmd=[{"prefix": "osd pool set", "pool": "backups", "var": "pg_num", "val": "32"}]: dispatch
Dec 11 09:14:53 compute-0 ceph-mon[74426]: from='mgr.14124 192.168.122.100:0/1266408288' entity='mgr.compute-0.wwpcae' cmd='[{"prefix": "osd pool set", "pool": "vms", "var": "pg_num", "val": "32"}]': finished
Dec 11 09:14:53 compute-0 ceph-mon[74426]: osdmap e25: 2 total, 2 up, 2 in
Dec 11 09:14:53 compute-0 ceph-mon[74426]: from='mgr.14124 192.168.122.100:0/1266408288' entity='mgr.compute-0.wwpcae' cmd=[{"prefix": "osd pool set", "pool": "volumes", "var": "pg_num", "val": "32"}]: dispatch
Dec 11 09:14:53 compute-0 ceph-mon[74426]: from='client.? 192.168.122.100:0/1917506725' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "backups", "app": "rbd"}]: dispatch
Dec 11 09:14:53 compute-0 ceph-mon[74426]: from='mgr.14124 192.168.122.100:0/1266408288' entity='mgr.compute-0.wwpcae' cmd=[{"prefix": "osd pool set", "pool": "vms", "var": "pg_num_actual", "val": "32"}]: dispatch
Dec 11 09:14:53 compute-0 systemd[1]: libpod-f685908e0845c233ef7fda00211fe43cc3367587a5d95d114ebc14529ef38343.scope: Deactivated successfully.
Dec 11 09:14:53 compute-0 podman[85746]: 2025-12-11 09:14:53.127224388 +0000 UTC m=+1.410644912 container died f685908e0845c233ef7fda00211fe43cc3367587a5d95d114ebc14529ef38343 (image=quay.io/ceph/ceph:v19, name=sad_engelbart, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, OSD_FLAVOR=default)
Dec 11 09:14:53 compute-0 systemd[1]: var-lib-containers-storage-overlay-23c9fa6a799a7d07f527f8b9ca43599785baf4d2d2f3ebbf5dc960ccf612ed7f-merged.mount: Deactivated successfully.
Dec 11 09:14:53 compute-0 podman[85746]: 2025-12-11 09:14:53.271466063 +0000 UTC m=+1.554886567 container remove f685908e0845c233ef7fda00211fe43cc3367587a5d95d114ebc14529ef38343 (image=quay.io/ceph/ceph:v19, name=sad_engelbart, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0)
Dec 11 09:14:53 compute-0 sudo[85743]: pam_unix(sudo:session): session closed for user root
Dec 11 09:14:53 compute-0 systemd[1]: libpod-conmon-f685908e0845c233ef7fda00211fe43cc3367587a5d95d114ebc14529ef38343.scope: Deactivated successfully.
Dec 11 09:14:53 compute-0 sudo[85822]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wznwiimonjbjmhgogmyurfzzbhwsdtwu ; /usr/bin/python3'
Dec 11 09:14:53 compute-0 sudo[85822]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 11 09:14:53 compute-0 python3[85824]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool application enable images rbd _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 11 09:14:53 compute-0 podman[85825]: 2025-12-11 09:14:53.611024967 +0000 UTC m=+0.038989228 container create 86af9a973dea59a0cc38326919d3f00cda48d03e84f6c2079a12c34567b579d4 (image=quay.io/ceph/ceph:v19, name=quizzical_babbage, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1)
Dec 11 09:14:53 compute-0 systemd[1]: Started libpod-conmon-86af9a973dea59a0cc38326919d3f00cda48d03e84f6c2079a12c34567b579d4.scope.
Dec 11 09:14:53 compute-0 systemd[1]: Started libcrun container.
Dec 11 09:14:53 compute-0 podman[85825]: 2025-12-11 09:14:53.595161179 +0000 UTC m=+0.023125470 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Dec 11 09:14:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/43689fb9794276ba1b68b7747c3f85a2e34ea8bad980ae57affc59c3b2d05e1b/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 11 09:14:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/43689fb9794276ba1b68b7747c3f85a2e34ea8bad980ae57affc59c3b2d05e1b/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec 11 09:14:53 compute-0 podman[85825]: 2025-12-11 09:14:53.70363049 +0000 UTC m=+0.131594761 container init 86af9a973dea59a0cc38326919d3f00cda48d03e84f6c2079a12c34567b579d4 (image=quay.io/ceph/ceph:v19, name=quizzical_babbage, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid)
Dec 11 09:14:53 compute-0 podman[85825]: 2025-12-11 09:14:53.709519348 +0000 UTC m=+0.137483599 container start 86af9a973dea59a0cc38326919d3f00cda48d03e84f6c2079a12c34567b579d4 (image=quay.io/ceph/ceph:v19, name=quizzical_babbage, ceph=True, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325)
Dec 11 09:14:53 compute-0 podman[85825]: 2025-12-11 09:14:53.713267148 +0000 UTC m=+0.141231399 container attach 86af9a973dea59a0cc38326919d3f00cda48d03e84f6c2079a12c34567b579d4 (image=quay.io/ceph/ceph:v19, name=quizzical_babbage, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 11 09:14:53 compute-0 ceph-mon[74426]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0)
Dec 11 09:14:53 compute-0 ceph-mon[74426]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/1266408288' entity='mgr.compute-0.wwpcae' 
Dec 11 09:14:53 compute-0 ceph-mon[74426]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Dec 11 09:14:53 compute-0 ceph-mon[74426]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/1266408288' entity='mgr.compute-0.wwpcae' 
Dec 11 09:14:53 compute-0 ceph-mon[74426]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0)
Dec 11 09:14:53 compute-0 ceph-mon[74426]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/1266408288' entity='mgr.compute-0.wwpcae' 
Dec 11 09:14:53 compute-0 ceph-mon[74426]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Dec 11 09:14:53 compute-0 ceph-mon[74426]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/1266408288' entity='mgr.compute-0.wwpcae' 
Dec 11 09:14:53 compute-0 ceph-mon[74426]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"} v 0)
Dec 11 09:14:53 compute-0 ceph-mon[74426]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/1266408288' entity='mgr.compute-0.wwpcae' cmd=[{"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"}]: dispatch
Dec 11 09:14:53 compute-0 ceph-mon[74426]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 11 09:14:53 compute-0 ceph-mon[74426]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/1266408288' entity='mgr.compute-0.wwpcae' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 11 09:14:53 compute-0 ceph-mon[74426]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Dec 11 09:14:53 compute-0 ceph-mon[74426]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/1266408288' entity='mgr.compute-0.wwpcae' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec 11 09:14:53 compute-0 ceph-mgr[74715]: [cephadm INFO cephadm.serve] Updating compute-2:/etc/ceph/ceph.conf
Dec 11 09:14:53 compute-0 ceph-mgr[74715]: log_channel(cephadm) log [INF] : Updating compute-2:/etc/ceph/ceph.conf
Dec 11 09:14:53 compute-0 ceph-mon[74426]: mon.compute-0@0(leader).osd e26 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 11 09:14:54 compute-0 ceph-mon[74426]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool application enable", "pool": "images", "app": "rbd"} v 0)
Dec 11 09:14:54 compute-0 ceph-mon[74426]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/235673189' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "images", "app": "rbd"}]: dispatch
Dec 11 09:14:54 compute-0 ceph-mon[74426]: mon.compute-0@0(leader).osd e26 do_prune osdmap full prune enabled
Dec 11 09:14:54 compute-0 ceph-mon[74426]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/1266408288' entity='mgr.compute-0.wwpcae' cmd='[{"prefix": "osd pool set", "pool": "backups", "var": "pg_num", "val": "32"}]': finished
Dec 11 09:14:54 compute-0 ceph-mon[74426]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/235673189' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "images", "app": "rbd"}]': finished
Dec 11 09:14:54 compute-0 ceph-mon[74426]: mon.compute-0@0(leader).osd e27 e27: 2 total, 2 up, 2 in
Dec 11 09:14:54 compute-0 quizzical_babbage[85841]: enabled application 'rbd' on pool 'images'
Dec 11 09:14:54 compute-0 ceph-mon[74426]: log_channel(cluster) log [DBG] : osdmap e27: 2 total, 2 up, 2 in
Dec 11 09:14:54 compute-0 ceph-mgr[74715]: [progress INFO root] update: starting ev be46c2f1-ffd5-4147-906a-052593362e84 (PG autoscaler increasing pool 4 PGs from 1 to 32)
Dec 11 09:14:54 compute-0 ceph-mon[74426]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "images", "var": "pg_num", "val": "32"} v 0)
Dec 11 09:14:54 compute-0 ceph-mon[74426]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/1266408288' entity='mgr.compute-0.wwpcae' cmd=[{"prefix": "osd pool set", "pool": "images", "var": "pg_num", "val": "32"}]: dispatch
Dec 11 09:14:54 compute-0 ceph-mon[74426]: pgmap v78: 7 pgs: 7 active+clean; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Dec 11 09:14:54 compute-0 ceph-mon[74426]: from='mgr.14124 192.168.122.100:0/1266408288' entity='mgr.compute-0.wwpcae' cmd='[{"prefix": "osd pool set", "pool": "volumes", "var": "pg_num", "val": "32"}]': finished
Dec 11 09:14:54 compute-0 ceph-mon[74426]: from='client.? 192.168.122.100:0/1917506725' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "backups", "app": "rbd"}]': finished
Dec 11 09:14:54 compute-0 ceph-mon[74426]: from='mgr.14124 192.168.122.100:0/1266408288' entity='mgr.compute-0.wwpcae' cmd='[{"prefix": "osd pool set", "pool": "vms", "var": "pg_num_actual", "val": "32"}]': finished
Dec 11 09:14:54 compute-0 ceph-mon[74426]: osdmap e26: 2 total, 2 up, 2 in
Dec 11 09:14:54 compute-0 ceph-mon[74426]: from='mgr.14124 192.168.122.100:0/1266408288' entity='mgr.compute-0.wwpcae' cmd=[{"prefix": "osd pool set", "pool": "backups", "var": "pg_num", "val": "32"}]: dispatch
Dec 11 09:14:54 compute-0 ceph-mon[74426]: from='mgr.14124 192.168.122.100:0/1266408288' entity='mgr.compute-0.wwpcae' 
Dec 11 09:14:54 compute-0 ceph-mon[74426]: from='mgr.14124 192.168.122.100:0/1266408288' entity='mgr.compute-0.wwpcae' 
Dec 11 09:14:54 compute-0 ceph-mon[74426]: from='mgr.14124 192.168.122.100:0/1266408288' entity='mgr.compute-0.wwpcae' 
Dec 11 09:14:54 compute-0 ceph-mon[74426]: from='mgr.14124 192.168.122.100:0/1266408288' entity='mgr.compute-0.wwpcae' 
Dec 11 09:14:54 compute-0 ceph-mon[74426]: from='mgr.14124 192.168.122.100:0/1266408288' entity='mgr.compute-0.wwpcae' cmd=[{"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"}]: dispatch
Dec 11 09:14:54 compute-0 ceph-mon[74426]: from='mgr.14124 192.168.122.100:0/1266408288' entity='mgr.compute-0.wwpcae' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 11 09:14:54 compute-0 ceph-mon[74426]: from='mgr.14124 192.168.122.100:0/1266408288' entity='mgr.compute-0.wwpcae' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec 11 09:14:54 compute-0 ceph-mon[74426]: from='client.? 192.168.122.100:0/235673189' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "images", "app": "rbd"}]: dispatch
Dec 11 09:14:54 compute-0 ceph-mon[74426]: from='mgr.14124 192.168.122.100:0/1266408288' entity='mgr.compute-0.wwpcae' cmd='[{"prefix": "osd pool set", "pool": "backups", "var": "pg_num", "val": "32"}]': finished
Dec 11 09:14:54 compute-0 ceph-mon[74426]: from='client.? 192.168.122.100:0/235673189' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "images", "app": "rbd"}]': finished
Dec 11 09:14:54 compute-0 ceph-mon[74426]: osdmap e27: 2 total, 2 up, 2 in
Dec 11 09:14:54 compute-0 ceph-mon[74426]: from='mgr.14124 192.168.122.100:0/1266408288' entity='mgr.compute-0.wwpcae' cmd=[{"prefix": "osd pool set", "pool": "images", "var": "pg_num", "val": "32"}]: dispatch
Dec 11 09:14:54 compute-0 systemd[1]: libpod-86af9a973dea59a0cc38326919d3f00cda48d03e84f6c2079a12c34567b579d4.scope: Deactivated successfully.
Dec 11 09:14:54 compute-0 podman[85825]: 2025-12-11 09:14:54.130857998 +0000 UTC m=+0.558822249 container died 86af9a973dea59a0cc38326919d3f00cda48d03e84f6c2079a12c34567b579d4 (image=quay.io/ceph/ceph:v19, name=quizzical_babbage, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, CEPH_REF=squid, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 11 09:14:54 compute-0 systemd[1]: var-lib-containers-storage-overlay-43689fb9794276ba1b68b7747c3f85a2e34ea8bad980ae57affc59c3b2d05e1b-merged.mount: Deactivated successfully.
Dec 11 09:14:54 compute-0 podman[85825]: 2025-12-11 09:14:54.217401387 +0000 UTC m=+0.645365648 container remove 86af9a973dea59a0cc38326919d3f00cda48d03e84f6c2079a12c34567b579d4 (image=quay.io/ceph/ceph:v19, name=quizzical_babbage, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 11 09:14:54 compute-0 systemd[1]: libpod-conmon-86af9a973dea59a0cc38326919d3f00cda48d03e84f6c2079a12c34567b579d4.scope: Deactivated successfully.
Dec 11 09:14:54 compute-0 sudo[85822]: pam_unix(sudo:session): session closed for user root
Dec 11 09:14:54 compute-0 sudo[85901]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ryghjtfdianyqydtegtafszsuutnpkpv ; /usr/bin/python3'
Dec 11 09:14:54 compute-0 sudo[85901]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 11 09:14:54 compute-0 python3[85903]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool application enable cephfs.cephfs.meta cephfs _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 11 09:14:54 compute-0 ceph-mgr[74715]: [cephadm INFO cephadm.serve] Updating compute-2:/var/lib/ceph/31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060/config/ceph.conf
Dec 11 09:14:54 compute-0 ceph-mgr[74715]: log_channel(cephadm) log [INF] : Updating compute-2:/var/lib/ceph/31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060/config/ceph.conf
Dec 11 09:14:54 compute-0 podman[85904]: 2025-12-11 09:14:54.554638557 +0000 UTC m=+0.047482430 container create c603aa01e9513c98427286c258c0246474e662e47171b4ea503bb805c9cd394e (image=quay.io/ceph/ceph:v19, name=goofy_hypatia, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 11 09:14:54 compute-0 systemd[1]: Started libpod-conmon-c603aa01e9513c98427286c258c0246474e662e47171b4ea503bb805c9cd394e.scope.
Dec 11 09:14:54 compute-0 systemd[1]: Started libcrun container.
Dec 11 09:14:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/951a2bb0424797d4673c120fbf9bfc3f1083df26f002c105208fd27ce40172d6/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 11 09:14:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/951a2bb0424797d4673c120fbf9bfc3f1083df26f002c105208fd27ce40172d6/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec 11 09:14:54 compute-0 podman[85904]: 2025-12-11 09:14:54.619216603 +0000 UTC m=+0.112060456 container init c603aa01e9513c98427286c258c0246474e662e47171b4ea503bb805c9cd394e (image=quay.io/ceph/ceph:v19, name=goofy_hypatia, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0)
Dec 11 09:14:54 compute-0 podman[85904]: 2025-12-11 09:14:54.625568806 +0000 UTC m=+0.118412659 container start c603aa01e9513c98427286c258c0246474e662e47171b4ea503bb805c9cd394e (image=quay.io/ceph/ceph:v19, name=goofy_hypatia, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.build-date=20250325, OSD_FLAVOR=default)
Dec 11 09:14:54 compute-0 podman[85904]: 2025-12-11 09:14:54.629680688 +0000 UTC m=+0.122524541 container attach c603aa01e9513c98427286c258c0246474e662e47171b4ea503bb805c9cd394e (image=quay.io/ceph/ceph:v19, name=goofy_hypatia, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Dec 11 09:14:54 compute-0 podman[85904]: 2025-12-11 09:14:54.534599536 +0000 UTC m=+0.027443409 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Dec 11 09:14:54 compute-0 ceph-mgr[74715]: log_channel(cluster) log [DBG] : pgmap v81: 38 pgs: 31 unknown, 7 active+clean; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Dec 11 09:14:54 compute-0 ceph-mon[74426]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "backups", "var": "pg_num_actual", "val": "32"} v 0)
Dec 11 09:14:54 compute-0 ceph-mon[74426]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/1266408288' entity='mgr.compute-0.wwpcae' cmd=[{"prefix": "osd pool set", "pool": "backups", "var": "pg_num_actual", "val": "32"}]: dispatch
Dec 11 09:14:54 compute-0 ceph-mon[74426]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "volumes", "var": "pg_num_actual", "val": "32"} v 0)
Dec 11 09:14:54 compute-0 ceph-mon[74426]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/1266408288' entity='mgr.compute-0.wwpcae' cmd=[{"prefix": "osd pool set", "pool": "volumes", "var": "pg_num_actual", "val": "32"}]: dispatch
Dec 11 09:14:54 compute-0 ceph-mon[74426]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool application enable", "pool": "cephfs.cephfs.meta", "app": "cephfs"} v 0)
Dec 11 09:14:54 compute-0 ceph-mon[74426]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2933185597' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "cephfs.cephfs.meta", "app": "cephfs"}]: dispatch
Dec 11 09:14:55 compute-0 ceph-mgr[74715]: [cephadm INFO cephadm.serve] Updating compute-2:/etc/ceph/ceph.client.admin.keyring
Dec 11 09:14:55 compute-0 ceph-mgr[74715]: log_channel(cephadm) log [INF] : Updating compute-2:/etc/ceph/ceph.client.admin.keyring
Dec 11 09:14:55 compute-0 ceph-mon[74426]: mon.compute-0@0(leader).osd e27 do_prune osdmap full prune enabled
Dec 11 09:14:55 compute-0 ceph-mon[74426]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/1266408288' entity='mgr.compute-0.wwpcae' cmd='[{"prefix": "osd pool set", "pool": "images", "var": "pg_num", "val": "32"}]': finished
Dec 11 09:14:55 compute-0 ceph-mon[74426]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/1266408288' entity='mgr.compute-0.wwpcae' cmd='[{"prefix": "osd pool set", "pool": "backups", "var": "pg_num_actual", "val": "32"}]': finished
Dec 11 09:14:55 compute-0 ceph-mon[74426]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/1266408288' entity='mgr.compute-0.wwpcae' cmd='[{"prefix": "osd pool set", "pool": "volumes", "var": "pg_num_actual", "val": "32"}]': finished
Dec 11 09:14:55 compute-0 ceph-mon[74426]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2933185597' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "cephfs.cephfs.meta", "app": "cephfs"}]': finished
Dec 11 09:14:55 compute-0 ceph-mon[74426]: mon.compute-0@0(leader).osd e28 e28: 2 total, 2 up, 2 in
Dec 11 09:14:55 compute-0 goofy_hypatia[85919]: enabled application 'cephfs' on pool 'cephfs.cephfs.meta'
Dec 11 09:14:55 compute-0 ceph-mon[74426]: log_channel(cluster) log [DBG] : osdmap e28: 2 total, 2 up, 2 in
Dec 11 09:14:55 compute-0 ceph-mgr[74715]: [progress INFO root] update: starting ev ed052ef4-f3ac-4f07-a284-c6bc9142712f (PG autoscaler increasing pool 5 PGs from 1 to 32)
Dec 11 09:14:55 compute-0 ceph-mon[74426]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num", "val": "32"} v 0)
Dec 11 09:14:55 compute-0 ceph-mon[74426]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/1266408288' entity='mgr.compute-0.wwpcae' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num", "val": "32"}]: dispatch
Dec 11 09:14:55 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 28 pg[3.0( empty local-lis/les=15/16 n=0 ec=15/15 lis/c=15/15 les/c/f=16/16/0 sis=28 pruub=10.554845810s) [1] r=0 lpr=28 pi=[15,28)/1 crt=0'0 mlcod 0'0 active pruub 46.803936005s@ mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [1] -> [1], acting_primary 1 -> 1, up_primary 1 -> 1, role 0 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Dec 11 09:14:55 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 28 pg[4.0( empty local-lis/les=17/18 n=0 ec=17/17 lis/c=17/17 les/c/f=18/18/0 sis=28 pruub=12.570239067s) [1] r=0 lpr=28 pi=[17,28)/1 crt=0'0 mlcod 0'0 active pruub 48.819339752s@ mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [1] -> [1], acting_primary 1 -> 1, up_primary 1 -> 1, role 0 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Dec 11 09:14:55 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 28 pg[4.0( empty local-lis/les=17/18 n=0 ec=17/17 lis/c=17/17 les/c/f=18/18/0 sis=28 pruub=12.570239067s) [1] r=0 lpr=28 pi=[17,28)/1 crt=0'0 mlcod 0'0 unknown pruub 48.819339752s@ mbc={}] state<Start>: transitioning to Primary
Dec 11 09:14:55 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 28 pg[3.0( empty local-lis/les=15/16 n=0 ec=15/15 lis/c=15/15 les/c/f=16/16/0 sis=28 pruub=10.554845810s) [1] r=0 lpr=28 pi=[15,28)/1 crt=0'0 mlcod 0'0 unknown pruub 46.803936005s@ mbc={}] state<Start>: transitioning to Primary
Dec 11 09:14:55 compute-0 systemd[1]: libpod-c603aa01e9513c98427286c258c0246474e662e47171b4ea503bb805c9cd394e.scope: Deactivated successfully.
Dec 11 09:14:55 compute-0 podman[85904]: 2025-12-11 09:14:55.143874668 +0000 UTC m=+0.636718521 container died c603aa01e9513c98427286c258c0246474e662e47171b4ea503bb805c9cd394e (image=quay.io/ceph/ceph:v19, name=goofy_hypatia, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Dec 11 09:14:55 compute-0 ceph-mon[74426]: Updating compute-2:/etc/ceph/ceph.conf
Dec 11 09:14:55 compute-0 ceph-mon[74426]: Updating compute-2:/var/lib/ceph/31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060/config/ceph.conf
Dec 11 09:14:55 compute-0 ceph-mon[74426]: from='mgr.14124 192.168.122.100:0/1266408288' entity='mgr.compute-0.wwpcae' cmd=[{"prefix": "osd pool set", "pool": "backups", "var": "pg_num_actual", "val": "32"}]: dispatch
Dec 11 09:14:55 compute-0 ceph-mon[74426]: from='mgr.14124 192.168.122.100:0/1266408288' entity='mgr.compute-0.wwpcae' cmd=[{"prefix": "osd pool set", "pool": "volumes", "var": "pg_num_actual", "val": "32"}]: dispatch
Dec 11 09:14:55 compute-0 ceph-mon[74426]: from='client.? 192.168.122.100:0/2933185597' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "cephfs.cephfs.meta", "app": "cephfs"}]: dispatch
Dec 11 09:14:55 compute-0 ceph-mon[74426]: from='mgr.14124 192.168.122.100:0/1266408288' entity='mgr.compute-0.wwpcae' cmd='[{"prefix": "osd pool set", "pool": "images", "var": "pg_num", "val": "32"}]': finished
Dec 11 09:14:55 compute-0 ceph-mon[74426]: from='mgr.14124 192.168.122.100:0/1266408288' entity='mgr.compute-0.wwpcae' cmd='[{"prefix": "osd pool set", "pool": "backups", "var": "pg_num_actual", "val": "32"}]': finished
Dec 11 09:14:55 compute-0 ceph-mon[74426]: from='mgr.14124 192.168.122.100:0/1266408288' entity='mgr.compute-0.wwpcae' cmd='[{"prefix": "osd pool set", "pool": "volumes", "var": "pg_num_actual", "val": "32"}]': finished
Dec 11 09:14:55 compute-0 ceph-mon[74426]: from='client.? 192.168.122.100:0/2933185597' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "cephfs.cephfs.meta", "app": "cephfs"}]': finished
Dec 11 09:14:55 compute-0 ceph-mon[74426]: osdmap e28: 2 total, 2 up, 2 in
Dec 11 09:14:55 compute-0 ceph-mon[74426]: from='mgr.14124 192.168.122.100:0/1266408288' entity='mgr.compute-0.wwpcae' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num", "val": "32"}]: dispatch
Dec 11 09:14:55 compute-0 systemd[1]: var-lib-containers-storage-overlay-951a2bb0424797d4673c120fbf9bfc3f1083df26f002c105208fd27ce40172d6-merged.mount: Deactivated successfully.
Dec 11 09:14:55 compute-0 podman[85904]: 2025-12-11 09:14:55.180666725 +0000 UTC m=+0.673510578 container remove c603aa01e9513c98427286c258c0246474e662e47171b4ea503bb805c9cd394e (image=quay.io/ceph/ceph:v19, name=goofy_hypatia, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Dec 11 09:14:55 compute-0 systemd[1]: libpod-conmon-c603aa01e9513c98427286c258c0246474e662e47171b4ea503bb805c9cd394e.scope: Deactivated successfully.
Dec 11 09:14:55 compute-0 sudo[85901]: pam_unix(sudo:session): session closed for user root
Dec 11 09:14:55 compute-0 sudo[85978]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qbzofhccnrwqozzqgzxvrhsnxkxhgkrt ; /usr/bin/python3'
Dec 11 09:14:55 compute-0 sudo[85978]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 11 09:14:55 compute-0 python3[85980]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool application enable cephfs.cephfs.data cephfs _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 11 09:14:55 compute-0 podman[85981]: 2025-12-11 09:14:55.52275273 +0000 UTC m=+0.049137634 container create a09dac8373aa34389af5c4a9a741b050ea8f46ce7b9d76e5dbfd9184645b9bda (image=quay.io/ceph/ceph:v19, name=pensive_brahmagupta, OSD_FLAVOR=default, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 11 09:14:55 compute-0 systemd[1]: Started libpod-conmon-a09dac8373aa34389af5c4a9a741b050ea8f46ce7b9d76e5dbfd9184645b9bda.scope.
Dec 11 09:14:55 compute-0 systemd[1]: Started libcrun container.
Dec 11 09:14:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/658010eee5e085a5867cabeccb3934f4f16aaaf7965980801fb33f8f47d04e19/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec 11 09:14:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/658010eee5e085a5867cabeccb3934f4f16aaaf7965980801fb33f8f47d04e19/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 11 09:14:55 compute-0 podman[85981]: 2025-12-11 09:14:55.501995346 +0000 UTC m=+0.028380270 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Dec 11 09:14:55 compute-0 podman[85981]: 2025-12-11 09:14:55.599696291 +0000 UTC m=+0.126081215 container init a09dac8373aa34389af5c4a9a741b050ea8f46ce7b9d76e5dbfd9184645b9bda (image=quay.io/ceph/ceph:v19, name=pensive_brahmagupta, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 11 09:14:55 compute-0 podman[85981]: 2025-12-11 09:14:55.60558535 +0000 UTC m=+0.131970254 container start a09dac8373aa34389af5c4a9a741b050ea8f46ce7b9d76e5dbfd9184645b9bda (image=quay.io/ceph/ceph:v19, name=pensive_brahmagupta, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 11 09:14:55 compute-0 podman[85981]: 2025-12-11 09:14:55.609238647 +0000 UTC m=+0.135623581 container attach a09dac8373aa34389af5c4a9a741b050ea8f46ce7b9d76e5dbfd9184645b9bda (image=quay.io/ceph/ceph:v19, name=pensive_brahmagupta, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 11 09:14:55 compute-0 ceph-mgr[74715]: [cephadm INFO cephadm.serve] Updating compute-2:/var/lib/ceph/31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060/config/ceph.client.admin.keyring
Dec 11 09:14:55 compute-0 ceph-mgr[74715]: log_channel(cephadm) log [INF] : Updating compute-2:/var/lib/ceph/31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060/config/ceph.client.admin.keyring
Dec 11 09:14:55 compute-0 ceph-mon[74426]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool application enable", "pool": "cephfs.cephfs.data", "app": "cephfs"} v 0)
Dec 11 09:14:55 compute-0 ceph-mon[74426]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2924337962' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "cephfs.cephfs.data", "app": "cephfs"}]: dispatch
Dec 11 09:14:56 compute-0 ceph-mon[74426]: mon.compute-0@0(leader).osd e28 do_prune osdmap full prune enabled
Dec 11 09:14:56 compute-0 ceph-mon[74426]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/1266408288' entity='mgr.compute-0.wwpcae' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num", "val": "32"}]': finished
Dec 11 09:14:56 compute-0 ceph-mon[74426]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2924337962' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "cephfs.cephfs.data", "app": "cephfs"}]': finished
Dec 11 09:14:56 compute-0 ceph-mon[74426]: mon.compute-0@0(leader).osd e29 e29: 2 total, 2 up, 2 in
Dec 11 09:14:56 compute-0 pensive_brahmagupta[85996]: enabled application 'cephfs' on pool 'cephfs.cephfs.data'
Dec 11 09:14:56 compute-0 ceph-mon[74426]: log_channel(cluster) log [DBG] : osdmap e29: 2 total, 2 up, 2 in
Dec 11 09:14:56 compute-0 ceph-mgr[74715]: [progress INFO root] update: starting ev 921ed38d-67e2-474c-be2b-63de7ab28791 (PG autoscaler increasing pool 6 PGs from 1 to 32)
Dec 11 09:14:56 compute-0 ceph-mon[74426]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num", "val": "32"} v 0)
Dec 11 09:14:56 compute-0 ceph-mon[74426]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/1266408288' entity='mgr.compute-0.wwpcae' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num", "val": "32"}]: dispatch
Dec 11 09:14:56 compute-0 ceph-mgr[74715]: [progress WARNING root] Starting Global Recovery Event,93 pgs not in active + clean state
Dec 11 09:14:56 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 29 pg[4.1f( empty local-lis/les=17/18 n=0 ec=28/17 lis/c=17/17 les/c/f=18/18/0 sis=28) [1] r=0 lpr=28 pi=[17,28)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 11 09:14:56 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 29 pg[4.1e( empty local-lis/les=17/18 n=0 ec=28/17 lis/c=17/17 les/c/f=18/18/0 sis=28) [1] r=0 lpr=28 pi=[17,28)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 11 09:14:56 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 29 pg[3.18( empty local-lis/les=15/16 n=0 ec=28/15 lis/c=15/15 les/c/f=16/16/0 sis=28) [1] r=0 lpr=28 pi=[15,28)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 11 09:14:56 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 29 pg[3.17( empty local-lis/les=15/16 n=0 ec=28/15 lis/c=15/15 les/c/f=16/16/0 sis=28) [1] r=0 lpr=28 pi=[15,28)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 11 09:14:56 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 29 pg[4.10( empty local-lis/les=17/18 n=0 ec=28/17 lis/c=17/17 les/c/f=18/18/0 sis=28) [1] r=0 lpr=28 pi=[17,28)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 11 09:14:56 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 29 pg[3.16( empty local-lis/les=15/16 n=0 ec=28/15 lis/c=15/15 les/c/f=16/16/0 sis=28) [1] r=0 lpr=28 pi=[15,28)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 11 09:14:56 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 29 pg[4.11( empty local-lis/les=17/18 n=0 ec=28/17 lis/c=17/17 les/c/f=18/18/0 sis=28) [1] r=0 lpr=28 pi=[17,28)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 11 09:14:56 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 29 pg[3.19( empty local-lis/les=15/16 n=0 ec=28/15 lis/c=15/15 les/c/f=16/16/0 sis=28) [1] r=0 lpr=28 pi=[15,28)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 11 09:14:56 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 29 pg[3.15( empty local-lis/les=15/16 n=0 ec=28/15 lis/c=15/15 les/c/f=16/16/0 sis=28) [1] r=0 lpr=28 pi=[15,28)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 11 09:14:56 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 29 pg[4.12( empty local-lis/les=17/18 n=0 ec=28/17 lis/c=17/17 les/c/f=18/18/0 sis=28) [1] r=0 lpr=28 pi=[17,28)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 11 09:14:56 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 29 pg[3.14( empty local-lis/les=15/16 n=0 ec=28/15 lis/c=15/15 les/c/f=16/16/0 sis=28) [1] r=0 lpr=28 pi=[15,28)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 11 09:14:56 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 29 pg[4.14( empty local-lis/les=17/18 n=0 ec=28/17 lis/c=17/17 les/c/f=18/18/0 sis=28) [1] r=0 lpr=28 pi=[17,28)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 11 09:14:56 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 29 pg[3.13( empty local-lis/les=15/16 n=0 ec=28/15 lis/c=15/15 les/c/f=16/16/0 sis=28) [1] r=0 lpr=28 pi=[15,28)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 11 09:14:56 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 29 pg[4.13( empty local-lis/les=17/18 n=0 ec=28/17 lis/c=17/17 les/c/f=18/18/0 sis=28) [1] r=0 lpr=28 pi=[17,28)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 11 09:14:56 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 29 pg[3.12( empty local-lis/les=15/16 n=0 ec=28/15 lis/c=15/15 les/c/f=16/16/0 sis=28) [1] r=0 lpr=28 pi=[15,28)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 11 09:14:56 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 29 pg[4.15( empty local-lis/les=17/18 n=0 ec=28/17 lis/c=17/17 les/c/f=18/18/0 sis=28) [1] r=0 lpr=28 pi=[17,28)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 11 09:14:56 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 29 pg[3.11( empty local-lis/les=15/16 n=0 ec=28/15 lis/c=15/15 les/c/f=16/16/0 sis=28) [1] r=0 lpr=28 pi=[15,28)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 11 09:14:56 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 29 pg[4.16( empty local-lis/les=17/18 n=0 ec=28/17 lis/c=17/17 les/c/f=18/18/0 sis=28) [1] r=0 lpr=28 pi=[17,28)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 11 09:14:56 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 29 pg[3.10( empty local-lis/les=15/16 n=0 ec=28/15 lis/c=15/15 les/c/f=16/16/0 sis=28) [1] r=0 lpr=28 pi=[15,28)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 11 09:14:56 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 29 pg[4.17( empty local-lis/les=17/18 n=0 ec=28/17 lis/c=17/17 les/c/f=18/18/0 sis=28) [1] r=0 lpr=28 pi=[17,28)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 11 09:14:56 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 29 pg[4.8( empty local-lis/les=17/18 n=0 ec=28/17 lis/c=17/17 les/c/f=18/18/0 sis=28) [1] r=0 lpr=28 pi=[17,28)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 11 09:14:56 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 29 pg[3.f( empty local-lis/les=15/16 n=0 ec=28/15 lis/c=15/15 les/c/f=16/16/0 sis=28) [1] r=0 lpr=28 pi=[15,28)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 11 09:14:56 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 29 pg[3.e( empty local-lis/les=15/16 n=0 ec=28/15 lis/c=15/15 les/c/f=16/16/0 sis=28) [1] r=0 lpr=28 pi=[15,28)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 11 09:14:56 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 29 pg[4.9( empty local-lis/les=17/18 n=0 ec=28/17 lis/c=17/17 les/c/f=18/18/0 sis=28) [1] r=0 lpr=28 pi=[17,28)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 11 09:14:56 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 29 pg[3.d( empty local-lis/les=15/16 n=0 ec=28/15 lis/c=15/15 les/c/f=16/16/0 sis=28) [1] r=0 lpr=28 pi=[15,28)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 11 09:14:56 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 29 pg[4.a( empty local-lis/les=17/18 n=0 ec=28/17 lis/c=17/17 les/c/f=18/18/0 sis=28) [1] r=0 lpr=28 pi=[17,28)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 11 09:14:56 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 29 pg[3.c( empty local-lis/les=15/16 n=0 ec=28/15 lis/c=15/15 les/c/f=16/16/0 sis=28) [1] r=0 lpr=28 pi=[15,28)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 11 09:14:56 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 29 pg[4.b( empty local-lis/les=17/18 n=0 ec=28/17 lis/c=17/17 les/c/f=18/18/0 sis=28) [1] r=0 lpr=28 pi=[17,28)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 11 09:14:56 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 29 pg[3.b( empty local-lis/les=15/16 n=0 ec=28/15 lis/c=15/15 les/c/f=16/16/0 sis=28) [1] r=0 lpr=28 pi=[15,28)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 11 09:14:56 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 29 pg[4.c( empty local-lis/les=17/18 n=0 ec=28/17 lis/c=17/17 les/c/f=18/18/0 sis=28) [1] r=0 lpr=28 pi=[17,28)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 11 09:14:56 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 29 pg[4.d( empty local-lis/les=17/18 n=0 ec=28/17 lis/c=17/17 les/c/f=18/18/0 sis=28) [1] r=0 lpr=28 pi=[17,28)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 11 09:14:56 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 29 pg[4.7( empty local-lis/les=17/18 n=0 ec=28/17 lis/c=17/17 les/c/f=18/18/0 sis=28) [1] r=0 lpr=28 pi=[17,28)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 11 09:14:56 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 29 pg[3.a( empty local-lis/les=15/16 n=0 ec=28/15 lis/c=15/15 les/c/f=16/16/0 sis=28) [1] r=0 lpr=28 pi=[15,28)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 11 09:14:56 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 29 pg[3.7( empty local-lis/les=15/16 n=0 ec=28/15 lis/c=15/15 les/c/f=16/16/0 sis=28) [1] r=0 lpr=28 pi=[15,28)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 11 09:14:56 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 29 pg[3.6( empty local-lis/les=15/16 n=0 ec=28/15 lis/c=15/15 les/c/f=16/16/0 sis=28) [1] r=0 lpr=28 pi=[15,28)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 11 09:14:56 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 29 pg[4.1( empty local-lis/les=17/18 n=0 ec=28/17 lis/c=17/17 les/c/f=18/18/0 sis=28) [1] r=0 lpr=28 pi=[17,28)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 11 09:14:56 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 29 pg[3.5( empty local-lis/les=15/16 n=0 ec=28/15 lis/c=15/15 les/c/f=16/16/0 sis=28) [1] r=0 lpr=28 pi=[15,28)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 11 09:14:56 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 29 pg[4.2( empty local-lis/les=17/18 n=0 ec=28/17 lis/c=17/17 les/c/f=18/18/0 sis=28) [1] r=0 lpr=28 pi=[17,28)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 11 09:14:56 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 29 pg[3.1( empty local-lis/les=15/16 n=0 ec=28/15 lis/c=15/15 les/c/f=16/16/0 sis=28) [1] r=0 lpr=28 pi=[15,28)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 11 09:14:56 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 29 pg[4.6( empty local-lis/les=17/18 n=0 ec=28/17 lis/c=17/17 les/c/f=18/18/0 sis=28) [1] r=0 lpr=28 pi=[17,28)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 11 09:14:56 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 29 pg[3.2( empty local-lis/les=15/16 n=0 ec=28/15 lis/c=15/15 les/c/f=16/16/0 sis=28) [1] r=0 lpr=28 pi=[15,28)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 11 09:14:56 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 29 pg[4.5( empty local-lis/les=17/18 n=0 ec=28/17 lis/c=17/17 les/c/f=18/18/0 sis=28) [1] r=0 lpr=28 pi=[17,28)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 11 09:14:56 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 29 pg[3.3( empty local-lis/les=15/16 n=0 ec=28/15 lis/c=15/15 les/c/f=16/16/0 sis=28) [1] r=0 lpr=28 pi=[15,28)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 11 09:14:56 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 29 pg[4.4( empty local-lis/les=17/18 n=0 ec=28/17 lis/c=17/17 les/c/f=18/18/0 sis=28) [1] r=0 lpr=28 pi=[17,28)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 11 09:14:56 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 29 pg[3.4( empty local-lis/les=15/16 n=0 ec=28/15 lis/c=15/15 les/c/f=16/16/0 sis=28) [1] r=0 lpr=28 pi=[15,28)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 11 09:14:56 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 29 pg[4.3( empty local-lis/les=17/18 n=0 ec=28/17 lis/c=17/17 les/c/f=18/18/0 sis=28) [1] r=0 lpr=28 pi=[17,28)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 11 09:14:56 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 29 pg[3.8( empty local-lis/les=15/16 n=0 ec=28/15 lis/c=15/15 les/c/f=16/16/0 sis=28) [1] r=0 lpr=28 pi=[15,28)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 11 09:14:56 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 29 pg[4.f( empty local-lis/les=17/18 n=0 ec=28/17 lis/c=17/17 les/c/f=18/18/0 sis=28) [1] r=0 lpr=28 pi=[17,28)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 11 09:14:56 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 29 pg[4.e( empty local-lis/les=17/18 n=0 ec=28/17 lis/c=17/17 les/c/f=18/18/0 sis=28) [1] r=0 lpr=28 pi=[17,28)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 11 09:14:56 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 29 pg[4.1d( empty local-lis/les=17/18 n=0 ec=28/17 lis/c=17/17 les/c/f=18/18/0 sis=28) [1] r=0 lpr=28 pi=[17,28)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 11 09:14:56 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 29 pg[3.1a( empty local-lis/les=15/16 n=0 ec=28/15 lis/c=15/15 les/c/f=16/16/0 sis=28) [1] r=0 lpr=28 pi=[15,28)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 11 09:14:56 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 29 pg[4.1c( empty local-lis/les=17/18 n=0 ec=28/17 lis/c=17/17 les/c/f=18/18/0 sis=28) [1] r=0 lpr=28 pi=[17,28)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 11 09:14:56 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 29 pg[3.1b( empty local-lis/les=15/16 n=0 ec=28/15 lis/c=15/15 les/c/f=16/16/0 sis=28) [1] r=0 lpr=28 pi=[15,28)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 11 09:14:56 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 29 pg[4.1b( empty local-lis/les=17/18 n=0 ec=28/17 lis/c=17/17 les/c/f=18/18/0 sis=28) [1] r=0 lpr=28 pi=[17,28)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 11 09:14:56 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 29 pg[3.1c( empty local-lis/les=15/16 n=0 ec=28/15 lis/c=15/15 les/c/f=16/16/0 sis=28) [1] r=0 lpr=28 pi=[15,28)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 11 09:14:56 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 29 pg[4.1a( empty local-lis/les=17/18 n=0 ec=28/17 lis/c=17/17 les/c/f=18/18/0 sis=28) [1] r=0 lpr=28 pi=[17,28)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 11 09:14:56 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 29 pg[3.1d( empty local-lis/les=15/16 n=0 ec=28/15 lis/c=15/15 les/c/f=16/16/0 sis=28) [1] r=0 lpr=28 pi=[15,28)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 11 09:14:56 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 29 pg[4.19( empty local-lis/les=17/18 n=0 ec=28/17 lis/c=17/17 les/c/f=18/18/0 sis=28) [1] r=0 lpr=28 pi=[17,28)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 11 09:14:56 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 29 pg[3.1e( empty local-lis/les=15/16 n=0 ec=28/15 lis/c=15/15 les/c/f=16/16/0 sis=28) [1] r=0 lpr=28 pi=[15,28)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 11 09:14:56 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 29 pg[3.1f( empty local-lis/les=15/16 n=0 ec=28/15 lis/c=15/15 les/c/f=16/16/0 sis=28) [1] r=0 lpr=28 pi=[15,28)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 11 09:14:56 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 29 pg[4.18( empty local-lis/les=17/18 n=0 ec=28/17 lis/c=17/17 les/c/f=18/18/0 sis=28) [1] r=0 lpr=28 pi=[17,28)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 11 09:14:56 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 29 pg[3.9( empty local-lis/les=15/16 n=0 ec=28/15 lis/c=15/15 les/c/f=16/16/0 sis=28) [1] r=0 lpr=28 pi=[15,28)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 11 09:14:56 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 29 pg[4.1e( empty local-lis/les=28/29 n=0 ec=28/17 lis/c=17/17 les/c/f=18/18/0 sis=28) [1] r=0 lpr=28 pi=[17,28)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 11 09:14:56 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 29 pg[4.10( empty local-lis/les=28/29 n=0 ec=28/17 lis/c=17/17 les/c/f=18/18/0 sis=28) [1] r=0 lpr=28 pi=[17,28)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 11 09:14:56 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 29 pg[3.17( empty local-lis/les=28/29 n=0 ec=28/15 lis/c=15/15 les/c/f=16/16/0 sis=28) [1] r=0 lpr=28 pi=[15,28)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 11 09:14:56 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 29 pg[4.1f( empty local-lis/les=28/29 n=0 ec=28/17 lis/c=17/17 les/c/f=18/18/0 sis=28) [1] r=0 lpr=28 pi=[17,28)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 11 09:14:56 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 29 pg[4.11( empty local-lis/les=28/29 n=0 ec=28/17 lis/c=17/17 les/c/f=18/18/0 sis=28) [1] r=0 lpr=28 pi=[17,28)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 11 09:14:56 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 29 pg[3.16( empty local-lis/les=28/29 n=0 ec=28/15 lis/c=15/15 les/c/f=16/16/0 sis=28) [1] r=0 lpr=28 pi=[15,28)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 11 09:14:56 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 29 pg[3.18( empty local-lis/les=28/29 n=0 ec=28/15 lis/c=15/15 les/c/f=16/16/0 sis=28) [1] r=0 lpr=28 pi=[15,28)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 11 09:14:56 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 29 pg[3.14( empty local-lis/les=28/29 n=0 ec=28/15 lis/c=15/15 les/c/f=16/16/0 sis=28) [1] r=0 lpr=28 pi=[15,28)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 11 09:14:56 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 29 pg[4.12( empty local-lis/les=28/29 n=0 ec=28/17 lis/c=17/17 les/c/f=18/18/0 sis=28) [1] r=0 lpr=28 pi=[17,28)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 11 09:14:56 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 29 pg[4.13( empty local-lis/les=28/29 n=0 ec=28/17 lis/c=17/17 les/c/f=18/18/0 sis=28) [1] r=0 lpr=28 pi=[17,28)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 11 09:14:56 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 29 pg[3.12( empty local-lis/les=28/29 n=0 ec=28/15 lis/c=15/15 les/c/f=16/16/0 sis=28) [1] r=0 lpr=28 pi=[15,28)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 11 09:14:56 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 29 pg[3.11( empty local-lis/les=28/29 n=0 ec=28/15 lis/c=15/15 les/c/f=16/16/0 sis=28) [1] r=0 lpr=28 pi=[15,28)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 11 09:14:56 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 29 pg[4.16( empty local-lis/les=28/29 n=0 ec=28/17 lis/c=17/17 les/c/f=18/18/0 sis=28) [1] r=0 lpr=28 pi=[17,28)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 11 09:14:56 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 29 pg[4.15( empty local-lis/les=28/29 n=0 ec=28/17 lis/c=17/17 les/c/f=18/18/0 sis=28) [1] r=0 lpr=28 pi=[17,28)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 11 09:14:56 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 29 pg[4.14( empty local-lis/les=28/29 n=0 ec=28/17 lis/c=17/17 les/c/f=18/18/0 sis=28) [1] r=0 lpr=28 pi=[17,28)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 11 09:14:56 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 29 pg[3.13( empty local-lis/les=28/29 n=0 ec=28/15 lis/c=15/15 les/c/f=16/16/0 sis=28) [1] r=0 lpr=28 pi=[15,28)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 11 09:14:56 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 29 pg[4.17( empty local-lis/les=28/29 n=0 ec=28/17 lis/c=17/17 les/c/f=18/18/0 sis=28) [1] r=0 lpr=28 pi=[17,28)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 11 09:14:56 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 29 pg[3.10( empty local-lis/les=28/29 n=0 ec=28/15 lis/c=15/15 les/c/f=16/16/0 sis=28) [1] r=0 lpr=28 pi=[15,28)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 11 09:14:56 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 29 pg[4.8( empty local-lis/les=28/29 n=0 ec=28/17 lis/c=17/17 les/c/f=18/18/0 sis=28) [1] r=0 lpr=28 pi=[17,28)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 11 09:14:56 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 29 pg[3.e( empty local-lis/les=28/29 n=0 ec=28/15 lis/c=15/15 les/c/f=16/16/0 sis=28) [1] r=0 lpr=28 pi=[15,28)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 11 09:14:56 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 29 pg[4.9( empty local-lis/les=28/29 n=0 ec=28/17 lis/c=17/17 les/c/f=18/18/0 sis=28) [1] r=0 lpr=28 pi=[17,28)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 11 09:14:56 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 29 pg[4.a( empty local-lis/les=28/29 n=0 ec=28/17 lis/c=17/17 les/c/f=18/18/0 sis=28) [1] r=0 lpr=28 pi=[17,28)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 11 09:14:56 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 29 pg[3.d( empty local-lis/les=28/29 n=0 ec=28/15 lis/c=15/15 les/c/f=16/16/0 sis=28) [1] r=0 lpr=28 pi=[15,28)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 11 09:14:56 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 29 pg[4.b( empty local-lis/les=28/29 n=0 ec=28/17 lis/c=17/17 les/c/f=18/18/0 sis=28) [1] r=0 lpr=28 pi=[17,28)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 11 09:14:56 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 29 pg[3.c( empty local-lis/les=28/29 n=0 ec=28/15 lis/c=15/15 les/c/f=16/16/0 sis=28) [1] r=0 lpr=28 pi=[15,28)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 11 09:14:56 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 29 pg[3.b( empty local-lis/les=28/29 n=0 ec=28/15 lis/c=15/15 les/c/f=16/16/0 sis=28) [1] r=0 lpr=28 pi=[15,28)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 11 09:14:56 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 29 pg[3.f( empty local-lis/les=28/29 n=0 ec=28/15 lis/c=15/15 les/c/f=16/16/0 sis=28) [1] r=0 lpr=28 pi=[15,28)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 11 09:14:56 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 29 pg[4.c( empty local-lis/les=28/29 n=0 ec=28/17 lis/c=17/17 les/c/f=18/18/0 sis=28) [1] r=0 lpr=28 pi=[17,28)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 11 09:14:56 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 29 pg[4.d( empty local-lis/les=28/29 n=0 ec=28/17 lis/c=17/17 les/c/f=18/18/0 sis=28) [1] r=0 lpr=28 pi=[17,28)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 11 09:14:56 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 29 pg[3.0( empty local-lis/les=28/29 n=0 ec=15/15 lis/c=15/15 les/c/f=16/16/0 sis=28) [1] r=0 lpr=28 pi=[15,28)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 11 09:14:56 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 29 pg[4.0( empty local-lis/les=28/29 n=0 ec=17/17 lis/c=17/17 les/c/f=18/18/0 sis=28) [1] r=0 lpr=28 pi=[17,28)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 11 09:14:56 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 29 pg[4.7( empty local-lis/les=28/29 n=0 ec=28/17 lis/c=17/17 les/c/f=18/18/0 sis=28) [1] r=0 lpr=28 pi=[17,28)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 11 09:14:56 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 29 pg[3.a( empty local-lis/les=28/29 n=0 ec=28/15 lis/c=15/15 les/c/f=16/16/0 sis=28) [1] r=0 lpr=28 pi=[15,28)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 11 09:14:56 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 29 pg[3.7( empty local-lis/les=28/29 n=0 ec=28/15 lis/c=15/15 les/c/f=16/16/0 sis=28) [1] r=0 lpr=28 pi=[15,28)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 11 09:14:56 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 29 pg[3.6( empty local-lis/les=28/29 n=0 ec=28/15 lis/c=15/15 les/c/f=16/16/0 sis=28) [1] r=0 lpr=28 pi=[15,28)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 11 09:14:56 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 29 pg[4.1( empty local-lis/les=28/29 n=0 ec=28/17 lis/c=17/17 les/c/f=18/18/0 sis=28) [1] r=0 lpr=28 pi=[17,28)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 11 09:14:56 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 29 pg[3.5( empty local-lis/les=28/29 n=0 ec=28/15 lis/c=15/15 les/c/f=16/16/0 sis=28) [1] r=0 lpr=28 pi=[15,28)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 11 09:14:56 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 29 pg[4.2( empty local-lis/les=28/29 n=0 ec=28/17 lis/c=17/17 les/c/f=18/18/0 sis=28) [1] r=0 lpr=28 pi=[17,28)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 11 09:14:56 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 29 pg[3.1( empty local-lis/les=28/29 n=0 ec=28/15 lis/c=15/15 les/c/f=16/16/0 sis=28) [1] r=0 lpr=28 pi=[15,28)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 11 09:14:56 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 29 pg[3.2( empty local-lis/les=28/29 n=0 ec=28/15 lis/c=15/15 les/c/f=16/16/0 sis=28) [1] r=0 lpr=28 pi=[15,28)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 11 09:14:56 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 29 pg[4.5( empty local-lis/les=28/29 n=0 ec=28/17 lis/c=17/17 les/c/f=18/18/0 sis=28) [1] r=0 lpr=28 pi=[17,28)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 11 09:14:56 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 29 pg[3.3( empty local-lis/les=28/29 n=0 ec=28/15 lis/c=15/15 les/c/f=16/16/0 sis=28) [1] r=0 lpr=28 pi=[15,28)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 11 09:14:56 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 29 pg[4.4( empty local-lis/les=28/29 n=0 ec=28/17 lis/c=17/17 les/c/f=18/18/0 sis=28) [1] r=0 lpr=28 pi=[17,28)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 11 09:14:56 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 29 pg[3.4( empty local-lis/les=28/29 n=0 ec=28/15 lis/c=15/15 les/c/f=16/16/0 sis=28) [1] r=0 lpr=28 pi=[15,28)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 11 09:14:56 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 29 pg[4.6( empty local-lis/les=28/29 n=0 ec=28/17 lis/c=17/17 les/c/f=18/18/0 sis=28) [1] r=0 lpr=28 pi=[17,28)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 11 09:14:56 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 29 pg[3.19( empty local-lis/les=28/29 n=0 ec=28/15 lis/c=15/15 les/c/f=16/16/0 sis=28) [1] r=0 lpr=28 pi=[15,28)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 11 09:14:56 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 29 pg[3.15( empty local-lis/les=28/29 n=0 ec=28/15 lis/c=15/15 les/c/f=16/16/0 sis=28) [1] r=0 lpr=28 pi=[15,28)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 11 09:14:56 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 29 pg[3.8( empty local-lis/les=28/29 n=0 ec=28/15 lis/c=15/15 les/c/f=16/16/0 sis=28) [1] r=0 lpr=28 pi=[15,28)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 11 09:14:56 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 29 pg[4.3( empty local-lis/les=28/29 n=0 ec=28/17 lis/c=17/17 les/c/f=18/18/0 sis=28) [1] r=0 lpr=28 pi=[17,28)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 11 09:14:56 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 29 pg[4.e( empty local-lis/les=28/29 n=0 ec=28/17 lis/c=17/17 les/c/f=18/18/0 sis=28) [1] r=0 lpr=28 pi=[17,28)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 11 09:14:56 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 29 pg[4.1d( empty local-lis/les=28/29 n=0 ec=28/17 lis/c=17/17 les/c/f=18/18/0 sis=28) [1] r=0 lpr=28 pi=[17,28)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 11 09:14:56 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 29 pg[3.1b( empty local-lis/les=28/29 n=0 ec=28/15 lis/c=15/15 les/c/f=16/16/0 sis=28) [1] r=0 lpr=28 pi=[15,28)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 11 09:14:56 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 29 pg[3.1a( empty local-lis/les=28/29 n=0 ec=28/15 lis/c=15/15 les/c/f=16/16/0 sis=28) [1] r=0 lpr=28 pi=[15,28)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 11 09:14:56 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 29 pg[4.1b( empty local-lis/les=28/29 n=0 ec=28/17 lis/c=17/17 les/c/f=18/18/0 sis=28) [1] r=0 lpr=28 pi=[17,28)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 11 09:14:56 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 29 pg[4.19( empty local-lis/les=28/29 n=0 ec=28/17 lis/c=17/17 les/c/f=18/18/0 sis=28) [1] r=0 lpr=28 pi=[17,28)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 11 09:14:56 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 29 pg[4.1a( empty local-lis/les=28/29 n=0 ec=28/17 lis/c=17/17 les/c/f=18/18/0 sis=28) [1] r=0 lpr=28 pi=[17,28)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 11 09:14:56 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 29 pg[3.1c( empty local-lis/les=28/29 n=0 ec=28/15 lis/c=15/15 les/c/f=16/16/0 sis=28) [1] r=0 lpr=28 pi=[15,28)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 11 09:14:56 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 29 pg[3.1e( empty local-lis/les=28/29 n=0 ec=28/15 lis/c=15/15 les/c/f=16/16/0 sis=28) [1] r=0 lpr=28 pi=[15,28)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 11 09:14:56 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 29 pg[3.1d( empty local-lis/les=28/29 n=0 ec=28/15 lis/c=15/15 les/c/f=16/16/0 sis=28) [1] r=0 lpr=28 pi=[15,28)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 11 09:14:56 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 29 pg[3.1f( empty local-lis/les=28/29 n=0 ec=28/15 lis/c=15/15 les/c/f=16/16/0 sis=28) [1] r=0 lpr=28 pi=[15,28)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 11 09:14:56 compute-0 ceph-mon[74426]: pgmap v81: 38 pgs: 31 unknown, 7 active+clean; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Dec 11 09:14:56 compute-0 ceph-mon[74426]: 2.1f scrub starts
Dec 11 09:14:56 compute-0 ceph-mon[74426]: 2.1f scrub ok
Dec 11 09:14:56 compute-0 ceph-mon[74426]: Updating compute-2:/etc/ceph/ceph.client.admin.keyring
Dec 11 09:14:56 compute-0 ceph-mon[74426]: Updating compute-2:/var/lib/ceph/31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060/config/ceph.client.admin.keyring
Dec 11 09:14:56 compute-0 ceph-mon[74426]: from='client.? 192.168.122.100:0/2924337962' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "cephfs.cephfs.data", "app": "cephfs"}]: dispatch
Dec 11 09:14:56 compute-0 ceph-mon[74426]: from='mgr.14124 192.168.122.100:0/1266408288' entity='mgr.compute-0.wwpcae' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num", "val": "32"}]': finished
Dec 11 09:14:56 compute-0 ceph-mon[74426]: from='client.? 192.168.122.100:0/2924337962' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "cephfs.cephfs.data", "app": "cephfs"}]': finished
Dec 11 09:14:56 compute-0 ceph-mon[74426]: osdmap e29: 2 total, 2 up, 2 in
Dec 11 09:14:56 compute-0 ceph-mon[74426]: from='mgr.14124 192.168.122.100:0/1266408288' entity='mgr.compute-0.wwpcae' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num", "val": "32"}]: dispatch
Dec 11 09:14:56 compute-0 systemd[1]: libpod-a09dac8373aa34389af5c4a9a741b050ea8f46ce7b9d76e5dbfd9184645b9bda.scope: Deactivated successfully.
Dec 11 09:14:56 compute-0 conmon[85996]: conmon a09dac8373aa34389af5 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-a09dac8373aa34389af5c4a9a741b050ea8f46ce7b9d76e5dbfd9184645b9bda.scope/container/memory.events
Dec 11 09:14:56 compute-0 podman[85981]: 2025-12-11 09:14:56.158778989 +0000 UTC m=+0.685163913 container died a09dac8373aa34389af5c4a9a741b050ea8f46ce7b9d76e5dbfd9184645b9bda (image=quay.io/ceph/ceph:v19, name=pensive_brahmagupta, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 11 09:14:56 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 29 pg[3.9( empty local-lis/les=28/29 n=0 ec=28/15 lis/c=15/15 les/c/f=16/16/0 sis=28) [1] r=0 lpr=28 pi=[15,28)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 11 09:14:56 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 29 pg[4.f( empty local-lis/les=28/29 n=0 ec=28/17 lis/c=17/17 les/c/f=18/18/0 sis=28) [1] r=0 lpr=28 pi=[17,28)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 11 09:14:56 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 29 pg[4.18( empty local-lis/les=28/29 n=0 ec=28/17 lis/c=17/17 les/c/f=18/18/0 sis=28) [1] r=0 lpr=28 pi=[17,28)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 11 09:14:56 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 29 pg[4.1c( empty local-lis/les=28/29 n=0 ec=28/17 lis/c=17/17 les/c/f=18/18/0 sis=28) [1] r=0 lpr=28 pi=[17,28)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 11 09:14:56 compute-0 ceph-mon[74426]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0)
Dec 11 09:14:56 compute-0 ceph-mon[74426]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/1266408288' entity='mgr.compute-0.wwpcae' 
Dec 11 09:14:56 compute-0 ceph-mon[74426]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Dec 11 09:14:56 compute-0 systemd[1]: var-lib-containers-storage-overlay-658010eee5e085a5867cabeccb3934f4f16aaaf7965980801fb33f8f47d04e19-merged.mount: Deactivated successfully.
Dec 11 09:14:56 compute-0 ceph-mon[74426]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/1266408288' entity='mgr.compute-0.wwpcae' 
Dec 11 09:14:56 compute-0 ceph-mon[74426]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Dec 11 09:14:56 compute-0 ceph-mon[74426]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/1266408288' entity='mgr.compute-0.wwpcae' 
Dec 11 09:14:56 compute-0 ceph-mgr[74715]: log_channel(cluster) log [DBG] : pgmap v84: 100 pgs: 93 unknown, 7 active+clean; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Dec 11 09:14:56 compute-0 ceph-mgr[74715]: [progress INFO root] update: starting ev d0634e73-e62c-4742-8d51-871f74e59709 (Updating mon deployment (+2 -> 3))
Dec 11 09:14:56 compute-0 ceph-mon[74426]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num_actual", "val": "32"} v 0)
Dec 11 09:14:56 compute-0 ceph-mon[74426]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/1266408288' entity='mgr.compute-0.wwpcae' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num_actual", "val": "32"}]: dispatch
Dec 11 09:14:56 compute-0 ceph-mon[74426]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "images", "var": "pg_num_actual", "val": "32"} v 0)
Dec 11 09:14:56 compute-0 ceph-mon[74426]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/1266408288' entity='mgr.compute-0.wwpcae' cmd=[{"prefix": "osd pool set", "pool": "images", "var": "pg_num_actual", "val": "32"}]: dispatch
Dec 11 09:14:56 compute-0 ceph-mon[74426]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "mon."} v 0)
Dec 11 09:14:56 compute-0 ceph-mon[74426]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/1266408288' entity='mgr.compute-0.wwpcae' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch
Dec 11 09:14:56 compute-0 podman[85981]: 2025-12-11 09:14:56.20195522 +0000 UTC m=+0.728340124 container remove a09dac8373aa34389af5c4a9a741b050ea8f46ce7b9d76e5dbfd9184645b9bda (image=quay.io/ceph/ceph:v19, name=pensive_brahmagupta, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 11 09:14:56 compute-0 ceph-mon[74426]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config get", "who": "mon", "key": "public_network"} v 0)
Dec 11 09:14:56 compute-0 ceph-mon[74426]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/1266408288' entity='mgr.compute-0.wwpcae' cmd=[{"prefix": "config get", "who": "mon", "key": "public_network"}]: dispatch
Dec 11 09:14:56 compute-0 ceph-mon[74426]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 11 09:14:56 compute-0 ceph-mon[74426]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/1266408288' entity='mgr.compute-0.wwpcae' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 11 09:14:56 compute-0 ceph-mgr[74715]: [cephadm INFO cephadm.serve] Deploying daemon mon.compute-2 on compute-2
Dec 11 09:14:56 compute-0 ceph-mgr[74715]: log_channel(cephadm) log [INF] : Deploying daemon mon.compute-2 on compute-2
Dec 11 09:14:56 compute-0 systemd[1]: libpod-conmon-a09dac8373aa34389af5c4a9a741b050ea8f46ce7b9d76e5dbfd9184645b9bda.scope: Deactivated successfully.
Dec 11 09:14:56 compute-0 sudo[85978]: pam_unix(sudo:session): session closed for user root
Dec 11 09:14:57 compute-0 ceph-mon[74426]: mon.compute-0@0(leader).osd e29 do_prune osdmap full prune enabled
Dec 11 09:14:57 compute-0 ceph-osd[82859]: log_channel(cluster) log [DBG] : 4.1e scrub starts
Dec 11 09:14:57 compute-0 ceph-osd[82859]: log_channel(cluster) log [DBG] : 4.1e scrub ok
Dec 11 09:14:57 compute-0 ceph-mon[74426]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/1266408288' entity='mgr.compute-0.wwpcae' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num", "val": "32"}]': finished
Dec 11 09:14:57 compute-0 ceph-mon[74426]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/1266408288' entity='mgr.compute-0.wwpcae' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num_actual", "val": "32"}]': finished
Dec 11 09:14:57 compute-0 ceph-mon[74426]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/1266408288' entity='mgr.compute-0.wwpcae' cmd='[{"prefix": "osd pool set", "pool": "images", "var": "pg_num_actual", "val": "32"}]': finished
Dec 11 09:14:57 compute-0 ceph-mon[74426]: mon.compute-0@0(leader).osd e30 e30: 2 total, 2 up, 2 in
Dec 11 09:14:57 compute-0 ceph-mon[74426]: log_channel(cluster) log [DBG] : osdmap e30: 2 total, 2 up, 2 in
Dec 11 09:14:57 compute-0 ceph-mon[74426]: log_channel(cluster) log [INF] : Health check cleared: CEPHADM_APPLY_SPEC_FAIL (was: Failed to apply 2 service(s): mon,mgr)
Dec 11 09:14:57 compute-0 ceph-mon[74426]: log_channel(cluster) log [INF] : Health check cleared: POOL_APP_NOT_ENABLED (was: 2 pool(s) do not have an application enabled)
Dec 11 09:14:57 compute-0 ceph-mon[74426]: log_channel(cluster) log [INF] : Cluster is now healthy
Dec 11 09:14:57 compute-0 ceph-mgr[74715]: [progress INFO root] update: starting ev 1f9b4deb-6563-4395-9e89-b1bfd3d44034 (PG autoscaler increasing pool 7 PGs from 1 to 32)
Dec 11 09:14:57 compute-0 ceph-mon[74426]: 2.1d scrub starts
Dec 11 09:14:57 compute-0 ceph-mon[74426]: 2.1d scrub ok
Dec 11 09:14:57 compute-0 ceph-mon[74426]: from='mgr.14124 192.168.122.100:0/1266408288' entity='mgr.compute-0.wwpcae' 
Dec 11 09:14:57 compute-0 ceph-mon[74426]: from='mgr.14124 192.168.122.100:0/1266408288' entity='mgr.compute-0.wwpcae' 
Dec 11 09:14:57 compute-0 ceph-mon[74426]: from='mgr.14124 192.168.122.100:0/1266408288' entity='mgr.compute-0.wwpcae' 
Dec 11 09:14:57 compute-0 ceph-mon[74426]: pgmap v84: 100 pgs: 93 unknown, 7 active+clean; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Dec 11 09:14:57 compute-0 ceph-mon[74426]: from='mgr.14124 192.168.122.100:0/1266408288' entity='mgr.compute-0.wwpcae' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num_actual", "val": "32"}]: dispatch
Dec 11 09:14:57 compute-0 ceph-mon[74426]: from='mgr.14124 192.168.122.100:0/1266408288' entity='mgr.compute-0.wwpcae' cmd=[{"prefix": "osd pool set", "pool": "images", "var": "pg_num_actual", "val": "32"}]: dispatch
Dec 11 09:14:57 compute-0 ceph-mon[74426]: from='mgr.14124 192.168.122.100:0/1266408288' entity='mgr.compute-0.wwpcae' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch
Dec 11 09:14:57 compute-0 ceph-mon[74426]: from='mgr.14124 192.168.122.100:0/1266408288' entity='mgr.compute-0.wwpcae' cmd=[{"prefix": "config get", "who": "mon", "key": "public_network"}]: dispatch
Dec 11 09:14:57 compute-0 ceph-mon[74426]: from='mgr.14124 192.168.122.100:0/1266408288' entity='mgr.compute-0.wwpcae' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 11 09:14:57 compute-0 ceph-mon[74426]: Deploying daemon mon.compute-2 on compute-2
Dec 11 09:14:57 compute-0 ceph-mgr[74715]: [progress INFO root] complete: finished ev a1f2958f-9ca3-4105-94ba-adaed98a6fef (PG autoscaler increasing pool 2 PGs from 1 to 32)
Dec 11 09:14:57 compute-0 ceph-mgr[74715]: [progress INFO root] Completed event a1f2958f-9ca3-4105-94ba-adaed98a6fef (PG autoscaler increasing pool 2 PGs from 1 to 32) in 5 seconds
Dec 11 09:14:57 compute-0 ceph-mgr[74715]: [progress INFO root] complete: finished ev bec620ca-26b5-483e-9fec-2794c9a9fc78 (PG autoscaler increasing pool 3 PGs from 1 to 32)
Dec 11 09:14:57 compute-0 ceph-mgr[74715]: [progress INFO root] Completed event bec620ca-26b5-483e-9fec-2794c9a9fc78 (PG autoscaler increasing pool 3 PGs from 1 to 32) in 4 seconds
Dec 11 09:14:57 compute-0 ceph-mgr[74715]: [progress INFO root] complete: finished ev be46c2f1-ffd5-4147-906a-052593362e84 (PG autoscaler increasing pool 4 PGs from 1 to 32)
Dec 11 09:14:57 compute-0 ceph-mgr[74715]: [progress INFO root] Completed event be46c2f1-ffd5-4147-906a-052593362e84 (PG autoscaler increasing pool 4 PGs from 1 to 32) in 3 seconds
Dec 11 09:14:57 compute-0 ceph-mgr[74715]: [progress INFO root] complete: finished ev ed052ef4-f3ac-4f07-a284-c6bc9142712f (PG autoscaler increasing pool 5 PGs from 1 to 32)
Dec 11 09:14:57 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 30 pg[5.0( empty local-lis/les=19/20 n=0 ec=19/19 lis/c=19/19 les/c/f=20/20/0 sis=30 pruub=12.588553429s) [1] r=0 lpr=30 pi=[19,30)/1 crt=0'0 mlcod 0'0 active pruub 50.913356781s@ mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [1] -> [1], acting_primary 1 -> 1, up_primary 1 -> 1, role 0 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Dec 11 09:14:57 compute-0 ceph-mgr[74715]: [progress INFO root] Completed event ed052ef4-f3ac-4f07-a284-c6bc9142712f (PG autoscaler increasing pool 5 PGs from 1 to 32) in 2 seconds
Dec 11 09:14:57 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 30 pg[6.0( empty local-lis/les=20/21 n=0 ec=20/20 lis/c=20/20 les/c/f=21/21/0 sis=30 pruub=13.597392082s) [1] r=0 lpr=30 pi=[20,30)/1 crt=0'0 mlcod 0'0 active pruub 51.922214508s@ mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [1] -> [1], acting_primary 1 -> 1, up_primary 1 -> 1, role 0 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Dec 11 09:14:57 compute-0 ceph-mgr[74715]: [progress INFO root] complete: finished ev 921ed38d-67e2-474c-be2b-63de7ab28791 (PG autoscaler increasing pool 6 PGs from 1 to 32)
Dec 11 09:14:57 compute-0 ceph-mgr[74715]: [progress INFO root] Completed event 921ed38d-67e2-474c-be2b-63de7ab28791 (PG autoscaler increasing pool 6 PGs from 1 to 32) in 1 seconds
Dec 11 09:14:57 compute-0 ceph-mgr[74715]: [progress INFO root] complete: finished ev 1f9b4deb-6563-4395-9e89-b1bfd3d44034 (PG autoscaler increasing pool 7 PGs from 1 to 32)
Dec 11 09:14:57 compute-0 ceph-mgr[74715]: [progress INFO root] Completed event 1f9b4deb-6563-4395-9e89-b1bfd3d44034 (PG autoscaler increasing pool 7 PGs from 1 to 32) in 0 seconds
Dec 11 09:14:57 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 30 pg[6.0( empty local-lis/les=20/21 n=0 ec=20/20 lis/c=20/20 les/c/f=21/21/0 sis=30 pruub=13.597392082s) [1] r=0 lpr=30 pi=[20,30)/1 crt=0'0 mlcod 0'0 unknown pruub 51.922214508s@ mbc={}] state<Start>: transitioning to Primary
Dec 11 09:14:57 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 30 pg[5.0( empty local-lis/les=19/20 n=0 ec=19/19 lis/c=19/19 les/c/f=20/20/0 sis=30 pruub=12.588553429s) [1] r=0 lpr=30 pi=[19,30)/1 crt=0'0 mlcod 0'0 unknown pruub 50.913356781s@ mbc={}] state<Start>: transitioning to Primary
Dec 11 09:14:57 compute-0 python3[86108]: ansible-ansible.legacy.stat Invoked with path=/tmp/ceph_rgw.yml follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec 11 09:14:57 compute-0 python3[86179]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1765444496.9378152-37121-85446728675974/source dest=/tmp/ceph_rgw.yml mode=0644 force=True follow=False _original_basename=ceph_rgw.yml.j2 checksum=ad866aa1f51f395809dd7ac5cb7a56d43c167b49 backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 11 09:14:58 compute-0 sudo[86279]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zaqosgcitxznpeyslyzrdkxymkdcwjbt ; /usr/bin/python3'
Dec 11 09:14:58 compute-0 sudo[86279]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 11 09:14:58 compute-0 ceph-osd[82859]: log_channel(cluster) log [DBG] : 4.10 scrub starts
Dec 11 09:14:58 compute-0 ceph-osd[82859]: log_channel(cluster) log [DBG] : 4.10 scrub ok
Dec 11 09:14:58 compute-0 ceph-mon[74426]: mon.compute-0@0(leader).osd e30 do_prune osdmap full prune enabled
Dec 11 09:14:58 compute-0 ceph-mgr[74715]: log_channel(cluster) log [DBG] : pgmap v86: 162 pgs: 124 unknown, 38 active+clean; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Dec 11 09:14:58 compute-0 ceph-mon[74426]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num_actual", "val": "32"} v 0)
Dec 11 09:14:58 compute-0 ceph-mon[74426]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/1266408288' entity='mgr.compute-0.wwpcae' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num_actual", "val": "32"}]: dispatch
Dec 11 09:14:58 compute-0 ceph-mon[74426]: mon.compute-0@0(leader).osd e31 e31: 2 total, 2 up, 2 in
Dec 11 09:14:58 compute-0 ceph-mon[74426]: log_channel(cluster) log [DBG] : osdmap e31: 2 total, 2 up, 2 in
Dec 11 09:14:58 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 31 pg[5.19( empty local-lis/les=19/20 n=0 ec=30/19 lis/c=19/19 les/c/f=20/20/0 sis=30) [1] r=0 lpr=30 pi=[19,30)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 11 09:14:58 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 31 pg[6.1a( empty local-lis/les=20/21 n=0 ec=30/20 lis/c=20/20 les/c/f=21/21/0 sis=30) [1] r=0 lpr=30 pi=[20,30)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 11 09:14:58 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 31 pg[6.1b( empty local-lis/les=20/21 n=0 ec=30/20 lis/c=20/20 les/c/f=21/21/0 sis=30) [1] r=0 lpr=30 pi=[20,30)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 11 09:14:58 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 31 pg[5.18( empty local-lis/les=19/20 n=0 ec=30/19 lis/c=19/19 les/c/f=20/20/0 sis=30) [1] r=0 lpr=30 pi=[19,30)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 11 09:14:58 compute-0 ceph-mon[74426]: 2.1e scrub starts
Dec 11 09:14:58 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 31 pg[5.1b( empty local-lis/les=19/20 n=0 ec=30/19 lis/c=19/19 les/c/f=20/20/0 sis=30) [1] r=0 lpr=30 pi=[19,30)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 11 09:14:58 compute-0 ceph-mon[74426]: 2.1e scrub ok
Dec 11 09:14:58 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 31 pg[6.18( empty local-lis/les=20/21 n=0 ec=30/20 lis/c=20/20 les/c/f=21/21/0 sis=30) [1] r=0 lpr=30 pi=[20,30)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 11 09:14:58 compute-0 ceph-mon[74426]: 4.1e scrub starts
Dec 11 09:14:58 compute-0 ceph-mon[74426]: 4.1e scrub ok
Dec 11 09:14:58 compute-0 ceph-mon[74426]: from='mgr.14124 192.168.122.100:0/1266408288' entity='mgr.compute-0.wwpcae' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num", "val": "32"}]': finished
Dec 11 09:14:58 compute-0 ceph-mon[74426]: from='mgr.14124 192.168.122.100:0/1266408288' entity='mgr.compute-0.wwpcae' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num_actual", "val": "32"}]': finished
Dec 11 09:14:58 compute-0 ceph-mon[74426]: from='mgr.14124 192.168.122.100:0/1266408288' entity='mgr.compute-0.wwpcae' cmd='[{"prefix": "osd pool set", "pool": "images", "var": "pg_num_actual", "val": "32"}]': finished
Dec 11 09:14:58 compute-0 ceph-mon[74426]: osdmap e30: 2 total, 2 up, 2 in
Dec 11 09:14:58 compute-0 ceph-mon[74426]: Health check cleared: CEPHADM_APPLY_SPEC_FAIL (was: Failed to apply 2 service(s): mon,mgr)
Dec 11 09:14:58 compute-0 ceph-mon[74426]: Health check cleared: POOL_APP_NOT_ENABLED (was: 2 pool(s) do not have an application enabled)
Dec 11 09:14:58 compute-0 ceph-mon[74426]: Cluster is now healthy
Dec 11 09:14:58 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 31 pg[5.1a( empty local-lis/les=19/20 n=0 ec=30/19 lis/c=19/19 les/c/f=20/20/0 sis=30) [1] r=0 lpr=30 pi=[19,30)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 11 09:14:58 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 31 pg[6.19( empty local-lis/les=20/21 n=0 ec=30/20 lis/c=20/20 les/c/f=21/21/0 sis=30) [1] r=0 lpr=30 pi=[20,30)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 11 09:14:58 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 31 pg[5.1d( empty local-lis/les=19/20 n=0 ec=30/19 lis/c=19/19 les/c/f=20/20/0 sis=30) [1] r=0 lpr=30 pi=[19,30)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 11 09:14:58 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 31 pg[6.1e( empty local-lis/les=20/21 n=0 ec=30/20 lis/c=20/20 les/c/f=21/21/0 sis=30) [1] r=0 lpr=30 pi=[20,30)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 11 09:14:58 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 31 pg[6.1f( empty local-lis/les=20/21 n=0 ec=30/20 lis/c=20/20 les/c/f=21/21/0 sis=30) [1] r=0 lpr=30 pi=[20,30)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 11 09:14:58 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 31 pg[5.1c( empty local-lis/les=19/20 n=0 ec=30/19 lis/c=19/19 les/c/f=20/20/0 sis=30) [1] r=0 lpr=30 pi=[19,30)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 11 09:14:58 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 31 pg[5.f( empty local-lis/les=19/20 n=0 ec=30/19 lis/c=19/19 les/c/f=20/20/0 sis=30) [1] r=0 lpr=30 pi=[19,30)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 11 09:14:58 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 31 pg[6.c( empty local-lis/les=20/21 n=0 ec=30/20 lis/c=20/20 les/c/f=21/21/0 sis=30) [1] r=0 lpr=30 pi=[20,30)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 11 09:14:58 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 31 pg[5.e( empty local-lis/les=19/20 n=0 ec=30/19 lis/c=19/19 les/c/f=20/20/0 sis=30) [1] r=0 lpr=30 pi=[19,30)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 11 09:14:58 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 31 pg[5.2( empty local-lis/les=19/20 n=0 ec=30/19 lis/c=19/19 les/c/f=20/20/0 sis=30) [1] r=0 lpr=30 pi=[19,30)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 11 09:14:58 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 31 pg[6.1( empty local-lis/les=20/21 n=0 ec=30/20 lis/c=20/20 les/c/f=21/21/0 sis=30) [1] r=0 lpr=30 pi=[20,30)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 11 09:14:58 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 31 pg[6.d( empty local-lis/les=20/21 n=0 ec=30/20 lis/c=20/20 les/c/f=21/21/0 sis=30) [1] r=0 lpr=30 pi=[20,30)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 11 09:14:58 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 31 pg[5.5( empty local-lis/les=19/20 n=0 ec=30/19 lis/c=19/19 les/c/f=20/20/0 sis=30) [1] r=0 lpr=30 pi=[19,30)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 11 09:14:58 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 31 pg[5.4( empty local-lis/les=19/20 n=0 ec=30/19 lis/c=19/19 les/c/f=20/20/0 sis=30) [1] r=0 lpr=30 pi=[19,30)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 11 09:14:58 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 31 pg[6.7( empty local-lis/les=20/21 n=0 ec=30/20 lis/c=20/20 les/c/f=21/21/0 sis=30) [1] r=0 lpr=30 pi=[20,30)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 11 09:14:58 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 31 pg[5.7( empty local-lis/les=19/20 n=0 ec=30/19 lis/c=19/19 les/c/f=20/20/0 sis=30) [1] r=0 lpr=30 pi=[19,30)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 11 09:14:58 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 31 pg[6.4( empty local-lis/les=20/21 n=0 ec=30/20 lis/c=20/20 les/c/f=21/21/0 sis=30) [1] r=0 lpr=30 pi=[20,30)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 11 09:14:58 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 31 pg[5.3( empty local-lis/les=19/20 n=0 ec=30/19 lis/c=19/19 les/c/f=20/20/0 sis=30) [1] r=0 lpr=30 pi=[19,30)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 11 09:14:58 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 31 pg[6.6( empty local-lis/les=20/21 n=0 ec=30/20 lis/c=20/20 les/c/f=21/21/0 sis=30) [1] r=0 lpr=30 pi=[20,30)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 11 09:14:58 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 31 pg[5.1( empty local-lis/les=19/20 n=0 ec=30/19 lis/c=19/19 les/c/f=20/20/0 sis=30) [1] r=0 lpr=30 pi=[19,30)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 11 09:14:58 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 31 pg[6.2( empty local-lis/les=20/21 n=0 ec=30/20 lis/c=20/20 les/c/f=21/21/0 sis=30) [1] r=0 lpr=30 pi=[20,30)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 11 09:14:58 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 31 pg[5.6( empty local-lis/les=19/20 n=0 ec=30/19 lis/c=19/19 les/c/f=20/20/0 sis=30) [1] r=0 lpr=30 pi=[19,30)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 11 09:14:58 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 31 pg[6.5( empty local-lis/les=20/21 n=0 ec=30/20 lis/c=20/20 les/c/f=21/21/0 sis=30) [1] r=0 lpr=30 pi=[20,30)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 11 09:14:58 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 31 pg[5.c( empty local-lis/les=19/20 n=0 ec=30/19 lis/c=19/19 les/c/f=20/20/0 sis=30) [1] r=0 lpr=30 pi=[19,30)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 11 09:14:58 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 31 pg[6.f( empty local-lis/les=20/21 n=0 ec=30/20 lis/c=20/20 les/c/f=21/21/0 sis=30) [1] r=0 lpr=30 pi=[20,30)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 11 09:14:58 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 31 pg[5.d( empty local-lis/les=19/20 n=0 ec=30/19 lis/c=19/19 les/c/f=20/20/0 sis=30) [1] r=0 lpr=30 pi=[19,30)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 11 09:14:58 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 31 pg[6.e( empty local-lis/les=20/21 n=0 ec=30/20 lis/c=20/20 les/c/f=21/21/0 sis=30) [1] r=0 lpr=30 pi=[20,30)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 11 09:14:58 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 31 pg[5.a( empty local-lis/les=19/20 n=0 ec=30/19 lis/c=19/19 les/c/f=20/20/0 sis=30) [1] r=0 lpr=30 pi=[19,30)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 11 09:14:58 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 31 pg[6.9( empty local-lis/les=20/21 n=0 ec=30/20 lis/c=20/20 les/c/f=21/21/0 sis=30) [1] r=0 lpr=30 pi=[20,30)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 11 09:14:58 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 31 pg[6.8( empty local-lis/les=20/21 n=0 ec=30/20 lis/c=20/20 les/c/f=21/21/0 sis=30) [1] r=0 lpr=30 pi=[20,30)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 11 09:14:58 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 31 pg[5.b( empty local-lis/les=19/20 n=0 ec=30/19 lis/c=19/19 les/c/f=20/20/0 sis=30) [1] r=0 lpr=30 pi=[19,30)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 11 09:14:58 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 31 pg[5.8( empty local-lis/les=19/20 n=0 ec=30/19 lis/c=19/19 les/c/f=20/20/0 sis=30) [1] r=0 lpr=30 pi=[19,30)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 11 09:14:58 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 31 pg[6.b( empty local-lis/les=20/21 n=0 ec=30/20 lis/c=20/20 les/c/f=21/21/0 sis=30) [1] r=0 lpr=30 pi=[20,30)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 11 09:14:58 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 31 pg[6.a( empty local-lis/les=20/21 n=0 ec=30/20 lis/c=20/20 les/c/f=21/21/0 sis=30) [1] r=0 lpr=30 pi=[20,30)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 11 09:14:58 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 31 pg[5.9( empty local-lis/les=19/20 n=0 ec=30/19 lis/c=19/19 les/c/f=20/20/0 sis=30) [1] r=0 lpr=30 pi=[19,30)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 11 09:14:58 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 31 pg[6.15( empty local-lis/les=20/21 n=0 ec=30/20 lis/c=20/20 les/c/f=21/21/0 sis=30) [1] r=0 lpr=30 pi=[20,30)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 11 09:14:58 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 31 pg[5.17( empty local-lis/les=19/20 n=0 ec=30/19 lis/c=19/19 les/c/f=20/20/0 sis=30) [1] r=0 lpr=30 pi=[19,30)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 11 09:14:58 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 31 pg[5.16( empty local-lis/les=19/20 n=0 ec=30/19 lis/c=19/19 les/c/f=20/20/0 sis=30) [1] r=0 lpr=30 pi=[19,30)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 11 09:14:58 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 31 pg[6.14( empty local-lis/les=20/21 n=0 ec=30/20 lis/c=20/20 les/c/f=21/21/0 sis=30) [1] r=0 lpr=30 pi=[20,30)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 11 09:14:58 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 31 pg[5.14( empty local-lis/les=19/20 n=0 ec=30/19 lis/c=19/19 les/c/f=20/20/0 sis=30) [1] r=0 lpr=30 pi=[19,30)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 11 09:14:58 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 31 pg[6.17( empty local-lis/les=20/21 n=0 ec=30/20 lis/c=20/20 les/c/f=21/21/0 sis=30) [1] r=0 lpr=30 pi=[20,30)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 11 09:14:58 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 31 pg[5.15( empty local-lis/les=19/20 n=0 ec=30/19 lis/c=19/19 les/c/f=20/20/0 sis=30) [1] r=0 lpr=30 pi=[19,30)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 11 09:14:58 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 31 pg[5.12( empty local-lis/les=19/20 n=0 ec=30/19 lis/c=19/19 les/c/f=20/20/0 sis=30) [1] r=0 lpr=30 pi=[19,30)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 11 09:14:58 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 31 pg[6.16( empty local-lis/les=20/21 n=0 ec=30/20 lis/c=20/20 les/c/f=21/21/0 sis=30) [1] r=0 lpr=30 pi=[20,30)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 11 09:14:58 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 31 pg[6.11( empty local-lis/les=20/21 n=0 ec=30/20 lis/c=20/20 les/c/f=21/21/0 sis=30) [1] r=0 lpr=30 pi=[20,30)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 11 09:14:58 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 31 pg[5.13( empty local-lis/les=19/20 n=0 ec=30/19 lis/c=19/19 les/c/f=20/20/0 sis=30) [1] r=0 lpr=30 pi=[19,30)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 11 09:14:58 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 31 pg[6.3( empty local-lis/les=20/21 n=0 ec=30/20 lis/c=20/20 les/c/f=21/21/0 sis=30) [1] r=0 lpr=30 pi=[20,30)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 11 09:14:58 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 31 pg[6.10( empty local-lis/les=20/21 n=0 ec=30/20 lis/c=20/20 les/c/f=21/21/0 sis=30) [1] r=0 lpr=30 pi=[20,30)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 11 09:14:58 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 31 pg[5.10( empty local-lis/les=19/20 n=0 ec=30/19 lis/c=19/19 les/c/f=20/20/0 sis=30) [1] r=0 lpr=30 pi=[19,30)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 11 09:14:58 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 31 pg[6.13( empty local-lis/les=20/21 n=0 ec=30/20 lis/c=20/20 les/c/f=21/21/0 sis=30) [1] r=0 lpr=30 pi=[20,30)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 11 09:14:58 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 31 pg[5.11( empty local-lis/les=19/20 n=0 ec=30/19 lis/c=19/19 les/c/f=20/20/0 sis=30) [1] r=0 lpr=30 pi=[19,30)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 11 09:14:58 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 31 pg[6.12( empty local-lis/les=20/21 n=0 ec=30/20 lis/c=20/20 les/c/f=21/21/0 sis=30) [1] r=0 lpr=30 pi=[20,30)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 11 09:14:58 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 31 pg[6.1d( empty local-lis/les=20/21 n=0 ec=30/20 lis/c=20/20 les/c/f=21/21/0 sis=30) [1] r=0 lpr=30 pi=[20,30)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 11 09:14:58 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 31 pg[6.1c( empty local-lis/les=20/21 n=0 ec=30/20 lis/c=20/20 les/c/f=21/21/0 sis=30) [1] r=0 lpr=30 pi=[20,30)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 11 09:14:58 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 31 pg[6.1a( empty local-lis/les=30/31 n=0 ec=30/20 lis/c=20/20 les/c/f=21/21/0 sis=30) [1] r=0 lpr=30 pi=[20,30)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 11 09:14:58 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 31 pg[5.1e( empty local-lis/les=19/20 n=0 ec=30/19 lis/c=19/19 les/c/f=20/20/0 sis=30) [1] r=0 lpr=30 pi=[19,30)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 11 09:14:58 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 31 pg[5.1f( empty local-lis/les=19/20 n=0 ec=30/19 lis/c=19/19 les/c/f=20/20/0 sis=30) [1] r=0 lpr=30 pi=[19,30)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 11 09:14:58 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 31 pg[5.19( empty local-lis/les=30/31 n=0 ec=30/19 lis/c=19/19 les/c/f=20/20/0 sis=30) [1] r=0 lpr=30 pi=[19,30)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 11 09:14:58 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 31 pg[6.18( empty local-lis/les=30/31 n=0 ec=30/20 lis/c=20/20 les/c/f=21/21/0 sis=30) [1] r=0 lpr=30 pi=[20,30)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 11 09:14:58 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 31 pg[6.1b( empty local-lis/les=30/31 n=0 ec=30/20 lis/c=20/20 les/c/f=21/21/0 sis=30) [1] r=0 lpr=30 pi=[20,30)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 11 09:14:58 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 31 pg[5.18( empty local-lis/les=30/31 n=0 ec=30/19 lis/c=19/19 les/c/f=20/20/0 sis=30) [1] r=0 lpr=30 pi=[19,30)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 11 09:14:58 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 31 pg[6.19( empty local-lis/les=30/31 n=0 ec=30/20 lis/c=20/20 les/c/f=21/21/0 sis=30) [1] r=0 lpr=30 pi=[20,30)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 11 09:14:58 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 31 pg[5.1d( empty local-lis/les=30/31 n=0 ec=30/19 lis/c=19/19 les/c/f=20/20/0 sis=30) [1] r=0 lpr=30 pi=[19,30)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 11 09:14:58 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 31 pg[5.1a( empty local-lis/les=30/31 n=0 ec=30/19 lis/c=19/19 les/c/f=20/20/0 sis=30) [1] r=0 lpr=30 pi=[19,30)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 11 09:14:58 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 31 pg[5.1b( empty local-lis/les=30/31 n=0 ec=30/19 lis/c=19/19 les/c/f=20/20/0 sis=30) [1] r=0 lpr=30 pi=[19,30)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 11 09:14:58 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 31 pg[6.1f( empty local-lis/les=30/31 n=0 ec=30/20 lis/c=20/20 les/c/f=21/21/0 sis=30) [1] r=0 lpr=30 pi=[20,30)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 11 09:14:58 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 31 pg[6.1e( empty local-lis/les=30/31 n=0 ec=30/20 lis/c=20/20 les/c/f=21/21/0 sis=30) [1] r=0 lpr=30 pi=[20,30)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 11 09:14:58 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 31 pg[6.c( empty local-lis/les=30/31 n=0 ec=30/20 lis/c=20/20 les/c/f=21/21/0 sis=30) [1] r=0 lpr=30 pi=[20,30)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 11 09:14:58 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 31 pg[5.1c( empty local-lis/les=30/31 n=0 ec=30/19 lis/c=19/19 les/c/f=20/20/0 sis=30) [1] r=0 lpr=30 pi=[19,30)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 11 09:14:58 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 31 pg[5.f( empty local-lis/les=30/31 n=0 ec=30/19 lis/c=19/19 les/c/f=20/20/0 sis=30) [1] r=0 lpr=30 pi=[19,30)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 11 09:14:58 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 31 pg[5.2( empty local-lis/les=30/31 n=0 ec=30/19 lis/c=19/19 les/c/f=20/20/0 sis=30) [1] r=0 lpr=30 pi=[19,30)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 11 09:14:58 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 31 pg[5.e( empty local-lis/les=30/31 n=0 ec=30/19 lis/c=19/19 les/c/f=20/20/0 sis=30) [1] r=0 lpr=30 pi=[19,30)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 11 09:14:58 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 31 pg[5.5( empty local-lis/les=30/31 n=0 ec=30/19 lis/c=19/19 les/c/f=20/20/0 sis=30) [1] r=0 lpr=30 pi=[19,30)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 11 09:14:58 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 31 pg[6.1( empty local-lis/les=30/31 n=0 ec=30/20 lis/c=20/20 les/c/f=21/21/0 sis=30) [1] r=0 lpr=30 pi=[20,30)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 11 09:14:58 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 31 pg[6.d( empty local-lis/les=30/31 n=0 ec=30/20 lis/c=20/20 les/c/f=21/21/0 sis=30) [1] r=0 lpr=30 pi=[20,30)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 11 09:14:58 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 31 pg[5.7( empty local-lis/les=30/31 n=0 ec=30/19 lis/c=19/19 les/c/f=20/20/0 sis=30) [1] r=0 lpr=30 pi=[19,30)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 11 09:14:58 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 31 pg[6.7( empty local-lis/les=30/31 n=0 ec=30/20 lis/c=20/20 les/c/f=21/21/0 sis=30) [1] r=0 lpr=30 pi=[20,30)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 11 09:14:58 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 31 pg[6.0( empty local-lis/les=30/31 n=0 ec=20/20 lis/c=20/20 les/c/f=21/21/0 sis=30) [1] r=0 lpr=30 pi=[20,30)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 11 09:14:58 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 31 pg[6.4( empty local-lis/les=30/31 n=0 ec=30/20 lis/c=20/20 les/c/f=21/21/0 sis=30) [1] r=0 lpr=30 pi=[20,30)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 11 09:14:58 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 31 pg[5.3( empty local-lis/les=30/31 n=0 ec=30/19 lis/c=19/19 les/c/f=20/20/0 sis=30) [1] r=0 lpr=30 pi=[19,30)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 11 09:14:58 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 31 pg[5.0( empty local-lis/les=30/31 n=0 ec=19/19 lis/c=19/19 les/c/f=20/20/0 sis=30) [1] r=0 lpr=30 pi=[19,30)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 11 09:14:58 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 31 pg[5.6( empty local-lis/les=30/31 n=0 ec=30/19 lis/c=19/19 les/c/f=20/20/0 sis=30) [1] r=0 lpr=30 pi=[19,30)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 11 09:14:58 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 31 pg[6.5( empty local-lis/les=30/31 n=0 ec=30/20 lis/c=20/20 les/c/f=21/21/0 sis=30) [1] r=0 lpr=30 pi=[20,30)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 11 09:14:58 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 31 pg[6.2( empty local-lis/les=30/31 n=0 ec=30/20 lis/c=20/20 les/c/f=21/21/0 sis=30) [1] r=0 lpr=30 pi=[20,30)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 11 09:14:58 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 31 pg[5.c( empty local-lis/les=30/31 n=0 ec=30/19 lis/c=19/19 les/c/f=20/20/0 sis=30) [1] r=0 lpr=30 pi=[19,30)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 11 09:14:58 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 31 pg[6.f( empty local-lis/les=30/31 n=0 ec=30/20 lis/c=20/20 les/c/f=21/21/0 sis=30) [1] r=0 lpr=30 pi=[20,30)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 11 09:14:58 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 31 pg[5.1( empty local-lis/les=30/31 n=0 ec=30/19 lis/c=19/19 les/c/f=20/20/0 sis=30) [1] r=0 lpr=30 pi=[19,30)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 11 09:14:58 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 31 pg[5.d( empty local-lis/les=30/31 n=0 ec=30/19 lis/c=19/19 les/c/f=20/20/0 sis=30) [1] r=0 lpr=30 pi=[19,30)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 11 09:14:58 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 31 pg[5.4( empty local-lis/les=30/31 n=0 ec=30/19 lis/c=19/19 les/c/f=20/20/0 sis=30) [1] r=0 lpr=30 pi=[19,30)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 11 09:14:58 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 31 pg[6.6( empty local-lis/les=30/31 n=0 ec=30/20 lis/c=20/20 les/c/f=21/21/0 sis=30) [1] r=0 lpr=30 pi=[20,30)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 11 09:14:58 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 31 pg[5.a( empty local-lis/les=30/31 n=0 ec=30/19 lis/c=19/19 les/c/f=20/20/0 sis=30) [1] r=0 lpr=30 pi=[19,30)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 11 09:14:58 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 31 pg[6.8( empty local-lis/les=30/31 n=0 ec=30/20 lis/c=20/20 les/c/f=21/21/0 sis=30) [1] r=0 lpr=30 pi=[20,30)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 11 09:14:58 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 31 pg[6.e( empty local-lis/les=30/31 n=0 ec=30/20 lis/c=20/20 les/c/f=21/21/0 sis=30) [1] r=0 lpr=30 pi=[20,30)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 11 09:14:58 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 31 pg[5.8( empty local-lis/les=30/31 n=0 ec=30/19 lis/c=19/19 les/c/f=20/20/0 sis=30) [1] r=0 lpr=30 pi=[19,30)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 11 09:14:58 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 31 pg[6.9( empty local-lis/les=30/31 n=0 ec=30/20 lis/c=20/20 les/c/f=21/21/0 sis=30) [1] r=0 lpr=30 pi=[20,30)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 11 09:14:58 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 31 pg[5.b( empty local-lis/les=30/31 n=0 ec=30/19 lis/c=19/19 les/c/f=20/20/0 sis=30) [1] r=0 lpr=30 pi=[19,30)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 11 09:14:58 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 31 pg[6.b( empty local-lis/les=30/31 n=0 ec=30/20 lis/c=20/20 les/c/f=21/21/0 sis=30) [1] r=0 lpr=30 pi=[20,30)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 11 09:14:58 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 31 pg[5.17( empty local-lis/les=30/31 n=0 ec=30/19 lis/c=19/19 les/c/f=20/20/0 sis=30) [1] r=0 lpr=30 pi=[19,30)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 11 09:14:58 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 31 pg[6.15( empty local-lis/les=30/31 n=0 ec=30/20 lis/c=20/20 les/c/f=21/21/0 sis=30) [1] r=0 lpr=30 pi=[20,30)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 11 09:14:58 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 31 pg[5.9( empty local-lis/les=30/31 n=0 ec=30/19 lis/c=19/19 les/c/f=20/20/0 sis=30) [1] r=0 lpr=30 pi=[19,30)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 11 09:14:58 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 31 pg[6.a( empty local-lis/les=30/31 n=0 ec=30/20 lis/c=20/20 les/c/f=21/21/0 sis=30) [1] r=0 lpr=30 pi=[20,30)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 11 09:14:58 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 31 pg[5.16( empty local-lis/les=30/31 n=0 ec=30/19 lis/c=19/19 les/c/f=20/20/0 sis=30) [1] r=0 lpr=30 pi=[19,30)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 11 09:14:58 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 31 pg[6.14( empty local-lis/les=30/31 n=0 ec=30/20 lis/c=20/20 les/c/f=21/21/0 sis=30) [1] r=0 lpr=30 pi=[20,30)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 11 09:14:58 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 31 pg[5.14( empty local-lis/les=30/31 n=0 ec=30/19 lis/c=19/19 les/c/f=20/20/0 sis=30) [1] r=0 lpr=30 pi=[19,30)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 11 09:14:58 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 31 pg[6.17( empty local-lis/les=30/31 n=0 ec=30/20 lis/c=20/20 les/c/f=21/21/0 sis=30) [1] r=0 lpr=30 pi=[20,30)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 11 09:14:58 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 31 pg[5.12( empty local-lis/les=30/31 n=0 ec=30/19 lis/c=19/19 les/c/f=20/20/0 sis=30) [1] r=0 lpr=30 pi=[19,30)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 11 09:14:58 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 31 pg[5.13( empty local-lis/les=30/31 n=0 ec=30/19 lis/c=19/19 les/c/f=20/20/0 sis=30) [1] r=0 lpr=30 pi=[19,30)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 11 09:14:58 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 31 pg[6.10( empty local-lis/les=30/31 n=0 ec=30/20 lis/c=20/20 les/c/f=21/21/0 sis=30) [1] r=0 lpr=30 pi=[20,30)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 11 09:14:58 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 31 pg[6.11( empty local-lis/les=30/31 n=0 ec=30/20 lis/c=20/20 les/c/f=21/21/0 sis=30) [1] r=0 lpr=30 pi=[20,30)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 11 09:14:58 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 31 pg[6.3( empty local-lis/les=30/31 n=0 ec=30/20 lis/c=20/20 les/c/f=21/21/0 sis=30) [1] r=0 lpr=30 pi=[20,30)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 11 09:14:58 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 31 pg[5.10( empty local-lis/les=30/31 n=0 ec=30/19 lis/c=19/19 les/c/f=20/20/0 sis=30) [1] r=0 lpr=30 pi=[19,30)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 11 09:14:58 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 31 pg[5.15( empty local-lis/les=30/31 n=0 ec=30/19 lis/c=19/19 les/c/f=20/20/0 sis=30) [1] r=0 lpr=30 pi=[19,30)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 11 09:14:58 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 31 pg[6.13( empty local-lis/les=30/31 n=0 ec=30/20 lis/c=20/20 les/c/f=21/21/0 sis=30) [1] r=0 lpr=30 pi=[20,30)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 11 09:14:58 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 31 pg[6.12( empty local-lis/les=30/31 n=0 ec=30/20 lis/c=20/20 les/c/f=21/21/0 sis=30) [1] r=0 lpr=30 pi=[20,30)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 11 09:14:58 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 31 pg[6.1c( empty local-lis/les=30/31 n=0 ec=30/20 lis/c=20/20 les/c/f=21/21/0 sis=30) [1] r=0 lpr=30 pi=[20,30)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 11 09:14:58 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 31 pg[6.1d( empty local-lis/les=30/31 n=0 ec=30/20 lis/c=20/20 les/c/f=21/21/0 sis=30) [1] r=0 lpr=30 pi=[20,30)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 11 09:14:58 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 31 pg[5.1e( empty local-lis/les=30/31 n=0 ec=30/19 lis/c=19/19 les/c/f=20/20/0 sis=30) [1] r=0 lpr=30 pi=[19,30)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 11 09:14:58 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 31 pg[5.1f( empty local-lis/les=30/31 n=0 ec=30/19 lis/c=19/19 les/c/f=20/20/0 sis=30) [1] r=0 lpr=30 pi=[19,30)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 11 09:14:58 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 31 pg[5.11( empty local-lis/les=30/31 n=0 ec=30/19 lis/c=19/19 les/c/f=20/20/0 sis=30) [1] r=0 lpr=30 pi=[19,30)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 11 09:14:58 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 31 pg[6.16( empty local-lis/les=30/31 n=0 ec=30/20 lis/c=20/20 les/c/f=21/21/0 sis=30) [1] r=0 lpr=30 pi=[20,30)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 11 09:14:58 compute-0 python3[86281]: ansible-ansible.legacy.stat Invoked with path=/home/ceph-admin/assimilate_ceph.conf follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec 11 09:14:58 compute-0 sudo[86279]: pam_unix(sudo:session): session closed for user root
Dec 11 09:14:58 compute-0 sudo[86354]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-epyivolrbedcvnsejsrfijrhcufqcmul ; /usr/bin/python3'
Dec 11 09:14:58 compute-0 sudo[86354]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 11 09:14:58 compute-0 ceph-mon[74426]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0)
Dec 11 09:14:58 compute-0 python3[86356]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1765444497.9702806-37135-12386314430923/source dest=/home/ceph-admin/assimilate_ceph.conf owner=167 group=167 mode=0644 follow=False _original_basename=ceph_rgw.conf.j2 checksum=a42e444c7f54f5ad38a92f6cfc6bb89f619c1fac backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 11 09:14:58 compute-0 sudo[86354]: pam_unix(sudo:session): session closed for user root
Dec 11 09:14:58 compute-0 ceph-mon[74426]: mon.compute-0@0(leader) e1  adding peer [v2:192.168.122.102:3300/0,v1:192.168.122.102:6789/0] to list of hints
Dec 11 09:14:58 compute-0 ceph-mon[74426]: mon.compute-0@0(leader) e1  adding peer [v2:192.168.122.102:3300/0,v1:192.168.122.102:6789/0] to list of hints
Dec 11 09:14:58 compute-0 ceph-mon[74426]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/1266408288' entity='mgr.compute-0.wwpcae' 
Dec 11 09:14:58 compute-0 ceph-mon[74426]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Dec 11 09:14:58 compute-0 ceph-mon[74426]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/1266408288' entity='mgr.compute-0.wwpcae' 
Dec 11 09:14:58 compute-0 ceph-mon[74426]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mon}] v 0)
Dec 11 09:14:58 compute-0 ceph-mon[74426]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/1266408288' entity='mgr.compute-0.wwpcae' 
Dec 11 09:14:58 compute-0 ceph-mon[74426]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "mon."} v 0)
Dec 11 09:14:58 compute-0 ceph-mon[74426]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/1266408288' entity='mgr.compute-0.wwpcae' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch
Dec 11 09:14:58 compute-0 ceph-mon[74426]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config get", "who": "mon", "key": "public_network"} v 0)
Dec 11 09:14:58 compute-0 ceph-mon[74426]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/1266408288' entity='mgr.compute-0.wwpcae' cmd=[{"prefix": "config get", "who": "mon", "key": "public_network"}]: dispatch
Dec 11 09:14:58 compute-0 ceph-mon[74426]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 11 09:14:58 compute-0 ceph-mon[74426]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/1266408288' entity='mgr.compute-0.wwpcae' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 11 09:14:58 compute-0 ceph-mgr[74715]: [cephadm INFO cephadm.serve] Deploying daemon mon.compute-1 on compute-1
Dec 11 09:14:58 compute-0 ceph-mgr[74715]: log_channel(cephadm) log [INF] : Deploying daemon mon.compute-1 on compute-1
Dec 11 09:14:58 compute-0 ceph-mon[74426]: mon.compute-0@0(leader) e1  adding peer [v2:192.168.122.102:3300/0,v1:192.168.122.102:6789/0] to list of hints
Dec 11 09:14:58 compute-0 ceph-mon[74426]: mon.compute-0@0(leader).monmap v1 adding/updating compute-2 at [v2:192.168.122.102:3300/0,v1:192.168.122.102:6789/0] to monitor cluster
Dec 11 09:14:58 compute-0 ceph-mgr[74715]: mgr.server handle_open ignoring open from mon.compute-2 192.168.122.102:0/1086568239; not ready for session (expect reconnect)
Dec 11 09:14:58 compute-0 ceph-mon[74426]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon metadata", "id": "compute-2"} v 0)
Dec 11 09:14:58 compute-0 ceph-mon[74426]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/1266408288' entity='mgr.compute-0.wwpcae' cmd=[{"prefix": "mon metadata", "id": "compute-2"}]: dispatch
Dec 11 09:14:58 compute-0 ceph-mgr[74715]: mgr finish mon failed to return metadata for mon.compute-2: (2) No such file or directory
Dec 11 09:14:58 compute-0 ceph-mon[74426]: mon.compute-0@0(probing) e2 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num_actual", "val": "32"} v 0)
Dec 11 09:14:58 compute-0 ceph-mon[74426]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/1266408288' entity='mgr.compute-0.wwpcae' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num_actual", "val": "32"}]: dispatch
Dec 11 09:14:58 compute-0 ceph-mon[74426]: mon.compute-0@0(probing) e2 handle_command mon_command({"prefix": "mon metadata", "id": "compute-0"} v 0)
Dec 11 09:14:58 compute-0 ceph-mon[74426]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/1266408288' entity='mgr.compute-0.wwpcae' cmd=[{"prefix": "mon metadata", "id": "compute-0"}]: dispatch
Dec 11 09:14:58 compute-0 ceph-mon[74426]: mon.compute-0@0(probing) e2 handle_command mon_command({"prefix": "mon metadata", "id": "compute-2"} v 0)
Dec 11 09:14:58 compute-0 ceph-mon[74426]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/1266408288' entity='mgr.compute-0.wwpcae' cmd=[{"prefix": "mon metadata", "id": "compute-2"}]: dispatch
Dec 11 09:14:58 compute-0 ceph-mgr[74715]: mgr finish mon failed to return metadata for mon.compute-2: (22) Invalid argument
Dec 11 09:14:58 compute-0 ceph-mon[74426]: log_channel(cluster) log [INF] : mon.compute-0 calling monitor election
Dec 11 09:14:58 compute-0 ceph-mon[74426]: paxos.0).electionLogic(5) init, last seen epoch 5, mid-election, bumping
Dec 11 09:14:58 compute-0 ceph-mon[74426]: mon.compute-0@0(electing) e2 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Dec 11 09:14:59 compute-0 sudo[86404]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-waevqdwmcpfxfbcujkwiypuuprbiychf ; /usr/bin/python3'
Dec 11 09:14:59 compute-0 sudo[86404]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 11 09:14:59 compute-0 ceph-osd[82859]: log_channel(cluster) log [DBG] : 4.1f scrub starts
Dec 11 09:14:59 compute-0 ceph-osd[82859]: log_channel(cluster) log [DBG] : 4.1f scrub ok
Dec 11 09:14:59 compute-0 python3[86406]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /tmp/ceph_rgw.yml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   config assimilate-conf -i /home/assimilate_ceph.conf _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 11 09:14:59 compute-0 podman[86407]: 2025-12-11 09:14:59.230211583 +0000 UTC m=+0.050043462 container create 1e874aaabb7c3cd0e5ab56e73f114f4eba2c517ee9437fcc9c236e71352744a8 (image=quay.io/ceph/ceph:v19, name=nice_sinoussi, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1)
Dec 11 09:14:59 compute-0 systemd[75800]: Starting Mark boot as successful...
Dec 11 09:14:59 compute-0 systemd[1]: Started libpod-conmon-1e874aaabb7c3cd0e5ab56e73f114f4eba2c517ee9437fcc9c236e71352744a8.scope.
Dec 11 09:14:59 compute-0 systemd[75800]: Finished Mark boot as successful.
Dec 11 09:14:59 compute-0 systemd[1]: Started libcrun container.
Dec 11 09:14:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b16f0d8a24d61c818909c73952ca2fe56c8cc615d3c64101911db6619c957b81/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec 11 09:14:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b16f0d8a24d61c818909c73952ca2fe56c8cc615d3c64101911db6619c957b81/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 11 09:14:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b16f0d8a24d61c818909c73952ca2fe56c8cc615d3c64101911db6619c957b81/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Dec 11 09:14:59 compute-0 podman[86407]: 2025-12-11 09:14:59.292056802 +0000 UTC m=+0.111888711 container init 1e874aaabb7c3cd0e5ab56e73f114f4eba2c517ee9437fcc9c236e71352744a8 (image=quay.io/ceph/ceph:v19, name=nice_sinoussi, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 11 09:14:59 compute-0 podman[86407]: 2025-12-11 09:14:59.30043824 +0000 UTC m=+0.120270119 container start 1e874aaabb7c3cd0e5ab56e73f114f4eba2c517ee9437fcc9c236e71352744a8 (image=quay.io/ceph/ceph:v19, name=nice_sinoussi, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 11 09:14:59 compute-0 podman[86407]: 2025-12-11 09:14:59.210370679 +0000 UTC m=+0.030202578 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Dec 11 09:14:59 compute-0 podman[86407]: 2025-12-11 09:14:59.304578233 +0000 UTC m=+0.124410142 container attach 1e874aaabb7c3cd0e5ab56e73f114f4eba2c517ee9437fcc9c236e71352744a8 (image=quay.io/ceph/ceph:v19, name=nice_sinoussi, org.label-schema.license=GPLv2, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 11 09:14:59 compute-0 ceph-mon[74426]: mon.compute-0@0(electing) e2 handle_auth_request failed to assign global_id
Dec 11 09:14:59 compute-0 ceph-mon[74426]: mon.compute-0@0(electing) e2 handle_auth_request failed to assign global_id
Dec 11 09:14:59 compute-0 ceph-mgr[74715]: mgr.server handle_open ignoring open from mon.compute-2 192.168.122.102:0/1086568239; not ready for session (expect reconnect)
Dec 11 09:14:59 compute-0 ceph-mon[74426]: mon.compute-0@0(electing) e2 handle_command mon_command({"prefix": "mon metadata", "id": "compute-2"} v 0)
Dec 11 09:14:59 compute-0 ceph-mon[74426]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/1266408288' entity='mgr.compute-0.wwpcae' cmd=[{"prefix": "mon metadata", "id": "compute-2"}]: dispatch
Dec 11 09:14:59 compute-0 ceph-mgr[74715]: mgr finish mon failed to return metadata for mon.compute-2: (22) Invalid argument
Dec 11 09:15:00 compute-0 ceph-mon[74426]: mon.compute-0@0(electing) e2 handle_auth_request failed to assign global_id
Dec 11 09:15:00 compute-0 ceph-osd[82859]: log_channel(cluster) log [DBG] : 3.17 scrub starts
Dec 11 09:15:00 compute-0 ceph-osd[82859]: log_channel(cluster) log [DBG] : 3.17 scrub ok
Dec 11 09:15:00 compute-0 ceph-mgr[74715]: log_channel(cluster) log [DBG] : pgmap v88: 162 pgs: 162 active+clean; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Dec 11 09:15:00 compute-0 ceph-mon[74426]: mon.compute-0@0(electing) e2 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num_actual", "val": "32"} v 0)
Dec 11 09:15:00 compute-0 ceph-mon[74426]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/1266408288' entity='mgr.compute-0.wwpcae' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num_actual", "val": "32"}]: dispatch
Dec 11 09:15:00 compute-0 ceph-mon[74426]: mon.compute-0@0(electing) e2 handle_command mon_command({"prefix": "osd pool set", "pool": "backups", "var": "pgp_num_actual", "val": "32"} v 0)
Dec 11 09:15:00 compute-0 ceph-mon[74426]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/1266408288' entity='mgr.compute-0.wwpcae' cmd=[{"prefix": "osd pool set", "pool": "backups", "var": "pgp_num_actual", "val": "32"}]: dispatch
Dec 11 09:15:00 compute-0 ceph-mon[74426]: mon.compute-0@0(electing) e2 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "32"} v 0)
Dec 11 09:15:00 compute-0 ceph-mon[74426]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/1266408288' entity='mgr.compute-0.wwpcae' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "32"}]: dispatch
Dec 11 09:15:00 compute-0 ceph-mon[74426]: mon.compute-0@0(electing) e2 handle_command mon_command({"prefix": "osd pool set", "pool": "images", "var": "pgp_num_actual", "val": "32"} v 0)
Dec 11 09:15:00 compute-0 ceph-mon[74426]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/1266408288' entity='mgr.compute-0.wwpcae' cmd=[{"prefix": "osd pool set", "pool": "images", "var": "pgp_num_actual", "val": "32"}]: dispatch
Dec 11 09:15:00 compute-0 ceph-mon[74426]: mon.compute-0@0(electing) e2 handle_command mon_command({"prefix": "osd pool set", "pool": "vms", "var": "pgp_num_actual", "val": "32"} v 0)
Dec 11 09:15:00 compute-0 ceph-mon[74426]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/1266408288' entity='mgr.compute-0.wwpcae' cmd=[{"prefix": "osd pool set", "pool": "vms", "var": "pgp_num_actual", "val": "32"}]: dispatch
Dec 11 09:15:00 compute-0 ceph-mon[74426]: mon.compute-0@0(electing) e2 handle_command mon_command({"prefix": "osd pool set", "pool": "volumes", "var": "pgp_num_actual", "val": "32"} v 0)
Dec 11 09:15:00 compute-0 ceph-mon[74426]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/1266408288' entity='mgr.compute-0.wwpcae' cmd=[{"prefix": "osd pool set", "pool": "volumes", "var": "pgp_num_actual", "val": "32"}]: dispatch
Dec 11 09:15:00 compute-0 ceph-mon[74426]: mon.compute-0@0(electing) e2 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Dec 11 09:15:00 compute-0 ceph-mon[74426]: mon.compute-0@0(electing) e2  adding peer [v2:192.168.122.101:3300/0,v1:192.168.122.101:6789/0] to list of hints
Dec 11 09:15:00 compute-0 ceph-mon[74426]: mon.compute-0@0(electing) e2  adding peer [v2:192.168.122.101:3300/0,v1:192.168.122.101:6789/0] to list of hints
Dec 11 09:15:00 compute-0 ceph-mon[74426]: mon.compute-0@0(electing) e2  adding peer [v2:192.168.122.101:3300/0,v1:192.168.122.101:6789/0] to list of hints
Dec 11 09:15:00 compute-0 ceph-mgr[74715]: mgr.server handle_open ignoring open from mon.compute-1 192.168.122.101:0/4028123285; not ready for session (expect reconnect)
Dec 11 09:15:00 compute-0 ceph-mon[74426]: mon.compute-0@0(electing) e2 handle_command mon_command({"prefix": "mon metadata", "id": "compute-1"} v 0)
Dec 11 09:15:00 compute-0 ceph-mon[74426]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/1266408288' entity='mgr.compute-0.wwpcae' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Dec 11 09:15:00 compute-0 ceph-mgr[74715]: mgr finish mon failed to return metadata for mon.compute-1: (2) No such file or directory
Dec 11 09:15:00 compute-0 ceph-mon[74426]: mon.compute-0@0(electing) e2 handle_auth_request failed to assign global_id
Dec 11 09:15:00 compute-0 ceph-mgr[74715]: mgr.server handle_open ignoring open from mon.compute-2 192.168.122.102:0/1086568239; not ready for session (expect reconnect)
Dec 11 09:15:00 compute-0 ceph-mon[74426]: mon.compute-0@0(electing) e2 handle_command mon_command({"prefix": "mon metadata", "id": "compute-2"} v 0)
Dec 11 09:15:00 compute-0 ceph-mon[74426]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/1266408288' entity='mgr.compute-0.wwpcae' cmd=[{"prefix": "mon metadata", "id": "compute-2"}]: dispatch
Dec 11 09:15:00 compute-0 ceph-mgr[74715]: mgr finish mon failed to return metadata for mon.compute-2: (22) Invalid argument
Dec 11 09:15:01 compute-0 ceph-osd[82859]: log_channel(cluster) log [DBG] : 4.11 deep-scrub starts
Dec 11 09:15:01 compute-0 ceph-mgr[74715]: [progress INFO root] Writing back 8 completed events
Dec 11 09:15:01 compute-0 ceph-mon[74426]: mon.compute-0@0(electing) e2 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0)
Dec 11 09:15:01 compute-0 ceph-osd[82859]: log_channel(cluster) log [DBG] : 4.11 deep-scrub ok
Dec 11 09:15:01 compute-0 ceph-mgr[74715]: mgr.server handle_open ignoring open from mon.compute-1 192.168.122.101:0/4028123285; not ready for session (expect reconnect)
Dec 11 09:15:01 compute-0 ceph-mon[74426]: mon.compute-0@0(electing) e2 handle_command mon_command({"prefix": "mon metadata", "id": "compute-1"} v 0)
Dec 11 09:15:01 compute-0 ceph-mon[74426]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/1266408288' entity='mgr.compute-0.wwpcae' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Dec 11 09:15:01 compute-0 ceph-mgr[74715]: mgr finish mon failed to return metadata for mon.compute-1: (2) No such file or directory
Dec 11 09:15:01 compute-0 ceph-mgr[74715]: mgr.server handle_open ignoring open from mon.compute-2 192.168.122.102:0/1086568239; not ready for session (expect reconnect)
Dec 11 09:15:01 compute-0 ceph-mon[74426]: mon.compute-0@0(electing) e2 handle_command mon_command({"prefix": "mon metadata", "id": "compute-2"} v 0)
Dec 11 09:15:01 compute-0 ceph-mon[74426]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/1266408288' entity='mgr.compute-0.wwpcae' cmd=[{"prefix": "mon metadata", "id": "compute-2"}]: dispatch
Dec 11 09:15:01 compute-0 ceph-mgr[74715]: mgr finish mon failed to return metadata for mon.compute-2: (22) Invalid argument
Dec 11 09:15:02 compute-0 ceph-osd[82859]: log_channel(cluster) log [DBG] : 3.16 scrub starts
Dec 11 09:15:02 compute-0 ceph-osd[82859]: log_channel(cluster) log [DBG] : 3.16 scrub ok
Dec 11 09:15:02 compute-0 ceph-mgr[74715]: log_channel(cluster) log [DBG] : pgmap v89: 162 pgs: 162 active+clean; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Dec 11 09:15:02 compute-0 ceph-mon[74426]: mon.compute-0@0(electing) e2 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num_actual", "val": "32"} v 0)
Dec 11 09:15:02 compute-0 ceph-mon[74426]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/1266408288' entity='mgr.compute-0.wwpcae' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num_actual", "val": "32"}]: dispatch
Dec 11 09:15:02 compute-0 ceph-mon[74426]: mon.compute-0@0(electing) e2 handle_command mon_command({"prefix": "osd pool set", "pool": "backups", "var": "pgp_num_actual", "val": "32"} v 0)
Dec 11 09:15:02 compute-0 ceph-mon[74426]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/1266408288' entity='mgr.compute-0.wwpcae' cmd=[{"prefix": "osd pool set", "pool": "backups", "var": "pgp_num_actual", "val": "32"}]: dispatch
Dec 11 09:15:02 compute-0 ceph-mon[74426]: mon.compute-0@0(electing) e2 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "32"} v 0)
Dec 11 09:15:02 compute-0 ceph-mon[74426]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/1266408288' entity='mgr.compute-0.wwpcae' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "32"}]: dispatch
Dec 11 09:15:02 compute-0 ceph-mon[74426]: mon.compute-0@0(electing) e2 handle_command mon_command({"prefix": "osd pool set", "pool": "images", "var": "pgp_num_actual", "val": "32"} v 0)
Dec 11 09:15:02 compute-0 ceph-mon[74426]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/1266408288' entity='mgr.compute-0.wwpcae' cmd=[{"prefix": "osd pool set", "pool": "images", "var": "pgp_num_actual", "val": "32"}]: dispatch
Dec 11 09:15:02 compute-0 ceph-mon[74426]: mon.compute-0@0(electing) e2 handle_command mon_command({"prefix": "osd pool set", "pool": "vms", "var": "pgp_num_actual", "val": "32"} v 0)
Dec 11 09:15:02 compute-0 ceph-mon[74426]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/1266408288' entity='mgr.compute-0.wwpcae' cmd=[{"prefix": "osd pool set", "pool": "vms", "var": "pgp_num_actual", "val": "32"}]: dispatch
Dec 11 09:15:02 compute-0 ceph-mon[74426]: mon.compute-0@0(electing) e2 handle_command mon_command({"prefix": "osd pool set", "pool": "volumes", "var": "pgp_num_actual", "val": "32"} v 0)
Dec 11 09:15:02 compute-0 ceph-mon[74426]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/1266408288' entity='mgr.compute-0.wwpcae' cmd=[{"prefix": "osd pool set", "pool": "volumes", "var": "pgp_num_actual", "val": "32"}]: dispatch
Dec 11 09:15:02 compute-0 ceph-mon[74426]: mon.compute-0@0(electing) e2 handle_auth_request failed to assign global_id
Dec 11 09:15:02 compute-0 ceph-mon[74426]: mon.compute-0@0(electing) e2 handle_auth_request failed to assign global_id
Dec 11 09:15:02 compute-0 ceph-mon[74426]: mon.compute-0@0(electing) e2  adding peer [v2:192.168.122.101:3300/0,v1:192.168.122.101:6789/0] to list of hints
Dec 11 09:15:02 compute-0 ceph-mgr[74715]: mgr.server handle_open ignoring open from mon.compute-1 192.168.122.101:0/4028123285; not ready for session (expect reconnect)
Dec 11 09:15:02 compute-0 ceph-mon[74426]: mon.compute-0@0(electing) e2 handle_command mon_command({"prefix": "mon metadata", "id": "compute-1"} v 0)
Dec 11 09:15:02 compute-0 ceph-mon[74426]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/1266408288' entity='mgr.compute-0.wwpcae' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Dec 11 09:15:02 compute-0 ceph-mgr[74715]: mgr finish mon failed to return metadata for mon.compute-1: (2) No such file or directory
Dec 11 09:15:02 compute-0 ceph-mgr[74715]: mgr.server handle_open ignoring open from mon.compute-2 192.168.122.102:0/1086568239; not ready for session (expect reconnect)
Dec 11 09:15:02 compute-0 ceph-mon[74426]: mon.compute-0@0(electing) e2 handle_command mon_command({"prefix": "mon metadata", "id": "compute-2"} v 0)
Dec 11 09:15:02 compute-0 ceph-mon[74426]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/1266408288' entity='mgr.compute-0.wwpcae' cmd=[{"prefix": "mon metadata", "id": "compute-2"}]: dispatch
Dec 11 09:15:02 compute-0 ceph-mgr[74715]: mgr finish mon failed to return metadata for mon.compute-2: (22) Invalid argument
Dec 11 09:15:03 compute-0 ceph-mon[74426]: mon.compute-0@0(electing) e2 handle_auth_request failed to assign global_id
Dec 11 09:15:03 compute-0 ceph-osd[82859]: log_channel(cluster) log [DBG] : 3.18 scrub starts
Dec 11 09:15:03 compute-0 ceph-osd[82859]: log_channel(cluster) log [DBG] : 3.18 scrub ok
Dec 11 09:15:03 compute-0 ceph-mgr[74715]: mgr.server handle_open ignoring open from mon.compute-1 192.168.122.101:0/4028123285; not ready for session (expect reconnect)
Dec 11 09:15:03 compute-0 ceph-mon[74426]: mon.compute-0@0(electing) e2 handle_command mon_command({"prefix": "mon metadata", "id": "compute-1"} v 0)
Dec 11 09:15:03 compute-0 ceph-mon[74426]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/1266408288' entity='mgr.compute-0.wwpcae' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Dec 11 09:15:03 compute-0 ceph-mgr[74715]: mgr finish mon failed to return metadata for mon.compute-1: (2) No such file or directory
Dec 11 09:15:03 compute-0 ceph-mon[74426]: mon.compute-0@0(electing) e2 handle_auth_request failed to assign global_id
Dec 11 09:15:03 compute-0 ceph-mgr[74715]: mgr.server handle_open ignoring open from mon.compute-2 192.168.122.102:0/1086568239; not ready for session (expect reconnect)
Dec 11 09:15:03 compute-0 ceph-mon[74426]: mon.compute-0@0(electing) e2 handle_command mon_command({"prefix": "mon metadata", "id": "compute-2"} v 0)
Dec 11 09:15:03 compute-0 ceph-mon[74426]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/1266408288' entity='mgr.compute-0.wwpcae' cmd=[{"prefix": "mon metadata", "id": "compute-2"}]: dispatch
Dec 11 09:15:03 compute-0 ceph-mgr[74715]: mgr finish mon failed to return metadata for mon.compute-2: (22) Invalid argument
Dec 11 09:15:03 compute-0 ceph-mon[74426]: paxos.0).electionLogic(7) init, last seen epoch 7, mid-election, bumping
Dec 11 09:15:03 compute-0 ceph-mon[74426]: mon.compute-0@0(electing) e2 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Dec 11 09:15:03 compute-0 ceph-mon[74426]: log_channel(cluster) log [INF] : mon.compute-0 is new leader, mons compute-0,compute-2 in quorum (ranks 0,1)
Dec 11 09:15:03 compute-0 ceph-mon[74426]: log_channel(cluster) log [DBG] : monmap epoch 2
Dec 11 09:15:03 compute-0 ceph-mon[74426]: log_channel(cluster) log [DBG] : fsid 31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060
Dec 11 09:15:03 compute-0 ceph-mon[74426]: log_channel(cluster) log [DBG] : last_changed 2025-12-11T09:14:58.875471+0000
Dec 11 09:15:03 compute-0 ceph-mon[74426]: log_channel(cluster) log [DBG] : created 2025-12-11T09:12:23.814502+0000
Dec 11 09:15:03 compute-0 ceph-mon[74426]: log_channel(cluster) log [DBG] : min_mon_release 19 (squid)
Dec 11 09:15:03 compute-0 ceph-mon[74426]: log_channel(cluster) log [DBG] : election_strategy: 1
Dec 11 09:15:03 compute-0 ceph-mon[74426]: log_channel(cluster) log [DBG] : 0: [v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0] mon.compute-0
Dec 11 09:15:03 compute-0 ceph-mon[74426]: log_channel(cluster) log [DBG] : 1: [v2:192.168.122.102:3300/0,v1:192.168.122.102:6789/0] mon.compute-2
Dec 11 09:15:03 compute-0 ceph-mon[74426]: mon.compute-0@0(leader) e2 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Dec 11 09:15:03 compute-0 ceph-mon[74426]: log_channel(cluster) log [DBG] : fsmap 
Dec 11 09:15:03 compute-0 ceph-mon[74426]: log_channel(cluster) log [DBG] : osdmap e31: 2 total, 2 up, 2 in
Dec 11 09:15:03 compute-0 ceph-mon[74426]: log_channel(cluster) log [DBG] : mgrmap e9: compute-0.wwpcae(active, since 2m)
Dec 11 09:15:03 compute-0 ceph-mon[74426]: log_channel(cluster) log [INF] : overall HEALTH_OK
Dec 11 09:15:03 compute-0 ceph-mon[74426]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/1266408288' entity='mgr.compute-0.wwpcae' 
Dec 11 09:15:03 compute-0 ceph-mon[74426]: mon.compute-0@0(leader) e2 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Dec 11 09:15:03 compute-0 ceph-mon[74426]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/1266408288' entity='mgr.compute-0.wwpcae' 
Dec 11 09:15:03 compute-0 ceph-mgr[74715]: [progress INFO root] Completed event 5cb29b88-c216-43c6-9da9-4cc43c710c5f (Global Recovery Event) in 8 seconds
Dec 11 09:15:03 compute-0 ceph-mon[74426]: mon.compute-0@0(leader).osd e31 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 11 09:15:03 compute-0 ceph-mon[74426]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/1266408288' entity='mgr.compute-0.wwpcae' 
Dec 11 09:15:04 compute-0 ceph-mon[74426]: mon.compute-0@0(leader) e2 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mon}] v 0)
Dec 11 09:15:04 compute-0 ceph-mon[74426]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/1266408288' entity='mgr.compute-0.wwpcae' 
Dec 11 09:15:04 compute-0 ceph-mgr[74715]: [progress INFO root] complete: finished ev d0634e73-e62c-4742-8d51-871f74e59709 (Updating mon deployment (+2 -> 3))
Dec 11 09:15:04 compute-0 ceph-mgr[74715]: [progress INFO root] Completed event d0634e73-e62c-4742-8d51-871f74e59709 (Updating mon deployment (+2 -> 3)) in 8 seconds
Dec 11 09:15:04 compute-0 ceph-mon[74426]: mon.compute-0@0(leader) e2 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mon}] v 0)
Dec 11 09:15:04 compute-0 ceph-mon[74426]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/1266408288' entity='mgr.compute-0.wwpcae' 
Dec 11 09:15:04 compute-0 ceph-mon[74426]: mon.compute-0@0(leader).osd e31 do_prune osdmap full prune enabled
Dec 11 09:15:04 compute-0 ceph-mgr[74715]: [progress INFO root] update: starting ev 0e5d9a64-c6c6-4b16-98b6-9d283c58f546 (Updating mgr deployment (+2 -> 3))
Dec 11 09:15:04 compute-0 ceph-mon[74426]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "auth get-or-create", "entity": "mgr.compute-2.uiimcn", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} v 0)
Dec 11 09:15:04 compute-0 ceph-mon[74426]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/1266408288' entity='mgr.compute-0.wwpcae' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.compute-2.uiimcn", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch
Dec 11 09:15:04 compute-0 ceph-mon[74426]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/1266408288' entity='mgr.compute-0.wwpcae' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num_actual", "val": "32"}]': finished
Dec 11 09:15:04 compute-0 ceph-mon[74426]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/1266408288' entity='mgr.compute-0.wwpcae' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num_actual", "val": "32"}]': finished
Dec 11 09:15:04 compute-0 ceph-mon[74426]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/1266408288' entity='mgr.compute-0.wwpcae' cmd='[{"prefix": "osd pool set", "pool": "backups", "var": "pgp_num_actual", "val": "32"}]': finished
Dec 11 09:15:04 compute-0 ceph-mon[74426]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/1266408288' entity='mgr.compute-0.wwpcae' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "32"}]': finished
Dec 11 09:15:04 compute-0 ceph-mon[74426]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/1266408288' entity='mgr.compute-0.wwpcae' cmd='[{"prefix": "osd pool set", "pool": "images", "var": "pgp_num_actual", "val": "32"}]': finished
Dec 11 09:15:04 compute-0 ceph-mon[74426]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/1266408288' entity='mgr.compute-0.wwpcae' cmd='[{"prefix": "osd pool set", "pool": "vms", "var": "pgp_num_actual", "val": "32"}]': finished
Dec 11 09:15:04 compute-0 ceph-mon[74426]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/1266408288' entity='mgr.compute-0.wwpcae' cmd='[{"prefix": "osd pool set", "pool": "volumes", "var": "pgp_num_actual", "val": "32"}]': finished
Dec 11 09:15:04 compute-0 ceph-mon[74426]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/1266408288' entity='mgr.compute-0.wwpcae' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num_actual", "val": "32"}]': finished
Dec 11 09:15:04 compute-0 ceph-mon[74426]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/1266408288' entity='mgr.compute-0.wwpcae' cmd='[{"prefix": "osd pool set", "pool": "backups", "var": "pgp_num_actual", "val": "32"}]': finished
Dec 11 09:15:04 compute-0 ceph-mon[74426]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/1266408288' entity='mgr.compute-0.wwpcae' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "32"}]': finished
Dec 11 09:15:04 compute-0 ceph-mon[74426]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/1266408288' entity='mgr.compute-0.wwpcae' cmd='[{"prefix": "osd pool set", "pool": "images", "var": "pgp_num_actual", "val": "32"}]': finished
Dec 11 09:15:04 compute-0 ceph-mon[74426]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/1266408288' entity='mgr.compute-0.wwpcae' cmd='[{"prefix": "osd pool set", "pool": "vms", "var": "pgp_num_actual", "val": "32"}]': finished
Dec 11 09:15:04 compute-0 ceph-mon[74426]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/1266408288' entity='mgr.compute-0.wwpcae' cmd='[{"prefix": "osd pool set", "pool": "volumes", "var": "pgp_num_actual", "val": "32"}]': finished
Dec 11 09:15:04 compute-0 ceph-mon[74426]: mon.compute-0@0(leader).osd e32 e32: 2 total, 2 up, 2 in
Dec 11 09:15:04 compute-0 ceph-mon[74426]: log_channel(cluster) log [DBG] : osdmap e32: 2 total, 2 up, 2 in
Dec 11 09:15:04 compute-0 ceph-mon[74426]: Deploying daemon mon.compute-1 on compute-1
Dec 11 09:15:04 compute-0 ceph-mon[74426]: 2.9 scrub starts
Dec 11 09:15:04 compute-0 ceph-mon[74426]: 2.9 scrub ok
Dec 11 09:15:04 compute-0 ceph-mon[74426]: from='mgr.14124 192.168.122.100:0/1266408288' entity='mgr.compute-0.wwpcae' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num_actual", "val": "32"}]: dispatch
Dec 11 09:15:04 compute-0 ceph-mon[74426]: from='mgr.14124 192.168.122.100:0/1266408288' entity='mgr.compute-0.wwpcae' cmd=[{"prefix": "mon metadata", "id": "compute-0"}]: dispatch
Dec 11 09:15:04 compute-0 ceph-mon[74426]: from='mgr.14124 192.168.122.100:0/1266408288' entity='mgr.compute-0.wwpcae' cmd=[{"prefix": "mon metadata", "id": "compute-2"}]: dispatch
Dec 11 09:15:04 compute-0 ceph-mon[74426]: mon.compute-0 calling monitor election
Dec 11 09:15:04 compute-0 ceph-mon[74426]: 4.1f scrub starts
Dec 11 09:15:04 compute-0 ceph-mon[74426]: 4.1f scrub ok
Dec 11 09:15:04 compute-0 ceph-mon[74426]: 2.7 scrub starts
Dec 11 09:15:04 compute-0 ceph-mon[74426]: from='mgr.14124 192.168.122.100:0/1266408288' entity='mgr.compute-0.wwpcae' cmd=[{"prefix": "mon metadata", "id": "compute-2"}]: dispatch
Dec 11 09:15:04 compute-0 ceph-mon[74426]: 2.7 scrub ok
Dec 11 09:15:04 compute-0 ceph-mon[74426]: 3.17 scrub starts
Dec 11 09:15:04 compute-0 ceph-mon[74426]: 3.17 scrub ok
Dec 11 09:15:04 compute-0 ceph-mon[74426]: pgmap v88: 162 pgs: 162 active+clean; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Dec 11 09:15:04 compute-0 ceph-mon[74426]: from='mgr.14124 192.168.122.100:0/1266408288' entity='mgr.compute-0.wwpcae' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num_actual", "val": "32"}]: dispatch
Dec 11 09:15:04 compute-0 ceph-mon[74426]: from='mgr.14124 192.168.122.100:0/1266408288' entity='mgr.compute-0.wwpcae' cmd=[{"prefix": "osd pool set", "pool": "backups", "var": "pgp_num_actual", "val": "32"}]: dispatch
Dec 11 09:15:04 compute-0 ceph-mon[74426]: from='mgr.14124 192.168.122.100:0/1266408288' entity='mgr.compute-0.wwpcae' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "32"}]: dispatch
Dec 11 09:15:04 compute-0 ceph-mon[74426]: from='mgr.14124 192.168.122.100:0/1266408288' entity='mgr.compute-0.wwpcae' cmd=[{"prefix": "osd pool set", "pool": "images", "var": "pgp_num_actual", "val": "32"}]: dispatch
Dec 11 09:15:04 compute-0 ceph-mon[74426]: from='mgr.14124 192.168.122.100:0/1266408288' entity='mgr.compute-0.wwpcae' cmd=[{"prefix": "osd pool set", "pool": "vms", "var": "pgp_num_actual", "val": "32"}]: dispatch
Dec 11 09:15:04 compute-0 ceph-mon[74426]: from='mgr.14124 192.168.122.100:0/1266408288' entity='mgr.compute-0.wwpcae' cmd=[{"prefix": "osd pool set", "pool": "volumes", "var": "pgp_num_actual", "val": "32"}]: dispatch
Dec 11 09:15:04 compute-0 ceph-mon[74426]: from='mgr.14124 192.168.122.100:0/1266408288' entity='mgr.compute-0.wwpcae' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Dec 11 09:15:04 compute-0 ceph-mon[74426]: 2.2 scrub starts
Dec 11 09:15:04 compute-0 ceph-mon[74426]: 2.2 scrub ok
Dec 11 09:15:04 compute-0 ceph-mon[74426]: from='mgr.14124 192.168.122.100:0/1266408288' entity='mgr.compute-0.wwpcae' cmd=[{"prefix": "mon metadata", "id": "compute-2"}]: dispatch
Dec 11 09:15:04 compute-0 ceph-mon[74426]: mon.compute-2 calling monitor election
Dec 11 09:15:04 compute-0 ceph-mon[74426]: 4.11 deep-scrub starts
Dec 11 09:15:04 compute-0 ceph-mon[74426]: 4.11 deep-scrub ok
Dec 11 09:15:04 compute-0 ceph-mon[74426]: from='mgr.14124 192.168.122.100:0/1266408288' entity='mgr.compute-0.wwpcae' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Dec 11 09:15:04 compute-0 ceph-mon[74426]: 2.6 scrub starts
Dec 11 09:15:04 compute-0 ceph-mon[74426]: from='mgr.14124 192.168.122.100:0/1266408288' entity='mgr.compute-0.wwpcae' cmd=[{"prefix": "mon metadata", "id": "compute-2"}]: dispatch
Dec 11 09:15:04 compute-0 ceph-mon[74426]: 2.6 scrub ok
Dec 11 09:15:04 compute-0 ceph-mon[74426]: 3.16 scrub starts
Dec 11 09:15:04 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 32 pg[2.19( empty local-lis/les=0/0 n=0 ec=26/13 lis/c=26/26 les/c/f=27/27/0 sis=32) [1] r=0 lpr=32 pi=[26,32)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 11 09:15:04 compute-0 ceph-mon[74426]: 3.16 scrub ok
Dec 11 09:15:04 compute-0 ceph-mon[74426]: pgmap v89: 162 pgs: 162 active+clean; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Dec 11 09:15:04 compute-0 ceph-mon[74426]: from='mgr.14124 192.168.122.100:0/1266408288' entity='mgr.compute-0.wwpcae' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num_actual", "val": "32"}]: dispatch
Dec 11 09:15:04 compute-0 ceph-mon[74426]: from='mgr.14124 192.168.122.100:0/1266408288' entity='mgr.compute-0.wwpcae' cmd=[{"prefix": "osd pool set", "pool": "backups", "var": "pgp_num_actual", "val": "32"}]: dispatch
Dec 11 09:15:04 compute-0 ceph-mon[74426]: from='mgr.14124 192.168.122.100:0/1266408288' entity='mgr.compute-0.wwpcae' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "32"}]: dispatch
Dec 11 09:15:04 compute-0 ceph-mon[74426]: from='mgr.14124 192.168.122.100:0/1266408288' entity='mgr.compute-0.wwpcae' cmd=[{"prefix": "osd pool set", "pool": "images", "var": "pgp_num_actual", "val": "32"}]: dispatch
Dec 11 09:15:04 compute-0 ceph-mon[74426]: from='mgr.14124 192.168.122.100:0/1266408288' entity='mgr.compute-0.wwpcae' cmd=[{"prefix": "osd pool set", "pool": "vms", "var": "pgp_num_actual", "val": "32"}]: dispatch
Dec 11 09:15:04 compute-0 ceph-mon[74426]: from='mgr.14124 192.168.122.100:0/1266408288' entity='mgr.compute-0.wwpcae' cmd=[{"prefix": "osd pool set", "pool": "volumes", "var": "pgp_num_actual", "val": "32"}]: dispatch
Dec 11 09:15:04 compute-0 ceph-mon[74426]: from='mgr.14124 192.168.122.100:0/1266408288' entity='mgr.compute-0.wwpcae' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Dec 11 09:15:04 compute-0 ceph-mon[74426]: 2.1 deep-scrub starts
Dec 11 09:15:04 compute-0 ceph-mon[74426]: 2.1 deep-scrub ok
Dec 11 09:15:04 compute-0 ceph-mon[74426]: from='mgr.14124 192.168.122.100:0/1266408288' entity='mgr.compute-0.wwpcae' cmd=[{"prefix": "mon metadata", "id": "compute-2"}]: dispatch
Dec 11 09:15:04 compute-0 ceph-mon[74426]: 3.18 scrub starts
Dec 11 09:15:04 compute-0 ceph-mon[74426]: 3.18 scrub ok
Dec 11 09:15:04 compute-0 ceph-mon[74426]: from='mgr.14124 192.168.122.100:0/1266408288' entity='mgr.compute-0.wwpcae' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Dec 11 09:15:04 compute-0 ceph-mon[74426]: from='mgr.14124 192.168.122.100:0/1266408288' entity='mgr.compute-0.wwpcae' cmd=[{"prefix": "mon metadata", "id": "compute-2"}]: dispatch
Dec 11 09:15:04 compute-0 ceph-mon[74426]: mon.compute-0 is new leader, mons compute-0,compute-2 in quorum (ranks 0,1)
Dec 11 09:15:04 compute-0 ceph-mon[74426]: monmap epoch 2
Dec 11 09:15:04 compute-0 ceph-mon[74426]: fsid 31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060
Dec 11 09:15:04 compute-0 ceph-mon[74426]: last_changed 2025-12-11T09:14:58.875471+0000
Dec 11 09:15:04 compute-0 ceph-mon[74426]: created 2025-12-11T09:12:23.814502+0000
Dec 11 09:15:04 compute-0 ceph-mon[74426]: min_mon_release 19 (squid)
Dec 11 09:15:04 compute-0 ceph-mon[74426]: election_strategy: 1
Dec 11 09:15:04 compute-0 ceph-mon[74426]: 0: [v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0] mon.compute-0
Dec 11 09:15:04 compute-0 ceph-mon[74426]: 1: [v2:192.168.122.102:3300/0,v1:192.168.122.102:6789/0] mon.compute-2
Dec 11 09:15:04 compute-0 ceph-mon[74426]: fsmap 
Dec 11 09:15:04 compute-0 ceph-mon[74426]: osdmap e31: 2 total, 2 up, 2 in
Dec 11 09:15:04 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 32 pg[2.15( empty local-lis/les=0/0 n=0 ec=26/13 lis/c=26/26 les/c/f=27/27/0 sis=32) [1] r=0 lpr=32 pi=[26,32)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 11 09:15:04 compute-0 ceph-mon[74426]: mgrmap e9: compute-0.wwpcae(active, since 2m)
Dec 11 09:15:04 compute-0 ceph-mon[74426]: overall HEALTH_OK
Dec 11 09:15:04 compute-0 ceph-mon[74426]: from='mgr.14124 192.168.122.100:0/1266408288' entity='mgr.compute-0.wwpcae' 
Dec 11 09:15:04 compute-0 ceph-mon[74426]: from='mgr.14124 192.168.122.100:0/1266408288' entity='mgr.compute-0.wwpcae' 
Dec 11 09:15:04 compute-0 ceph-mon[74426]: from='mgr.14124 192.168.122.100:0/1266408288' entity='mgr.compute-0.wwpcae' 
Dec 11 09:15:04 compute-0 ceph-mon[74426]: from='mgr.14124 192.168.122.100:0/1266408288' entity='mgr.compute-0.wwpcae' 
Dec 11 09:15:04 compute-0 ceph-mon[74426]: from='mgr.14124 192.168.122.100:0/1266408288' entity='mgr.compute-0.wwpcae' 
Dec 11 09:15:04 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 32 pg[2.13( empty local-lis/les=0/0 n=0 ec=26/13 lis/c=26/26 les/c/f=27/27/0 sis=32) [1] r=0 lpr=32 pi=[26,32)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 11 09:15:04 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 32 pg[2.10( empty local-lis/les=0/0 n=0 ec=26/13 lis/c=26/26 les/c/f=27/27/0 sis=32) [1] r=0 lpr=32 pi=[26,32)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 11 09:15:04 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 32 pg[2.e( empty local-lis/les=0/0 n=0 ec=26/13 lis/c=26/26 les/c/f=27/27/0 sis=32) [1] r=0 lpr=32 pi=[26,32)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 11 09:15:04 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 32 pg[2.c( empty local-lis/les=0/0 n=0 ec=26/13 lis/c=26/26 les/c/f=27/27/0 sis=32) [1] r=0 lpr=32 pi=[26,32)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 11 09:15:04 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 32 pg[2.d( empty local-lis/les=0/0 n=0 ec=26/13 lis/c=26/26 les/c/f=27/27/0 sis=32) [1] r=0 lpr=32 pi=[26,32)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 11 09:15:04 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 32 pg[2.a( empty local-lis/les=0/0 n=0 ec=26/13 lis/c=26/26 les/c/f=27/27/0 sis=32) [1] r=0 lpr=32 pi=[26,32)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 11 09:15:04 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 32 pg[2.1( empty local-lis/les=0/0 n=0 ec=26/13 lis/c=26/26 les/c/f=27/27/0 sis=32) [1] r=0 lpr=32 pi=[26,32)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 11 09:15:04 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 32 pg[2.6( empty local-lis/les=0/0 n=0 ec=26/13 lis/c=26/26 les/c/f=27/27/0 sis=32) [1] r=0 lpr=32 pi=[26,32)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 11 09:15:04 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 32 pg[2.4( empty local-lis/les=0/0 n=0 ec=26/13 lis/c=26/26 les/c/f=27/27/0 sis=32) [1] r=0 lpr=32 pi=[26,32)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 11 09:15:04 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 32 pg[2.9( empty local-lis/les=0/0 n=0 ec=26/13 lis/c=26/26 les/c/f=27/27/0 sis=32) [1] r=0 lpr=32 pi=[26,32)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 11 09:15:04 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 32 pg[2.1b( empty local-lis/les=0/0 n=0 ec=26/13 lis/c=26/26 les/c/f=27/27/0 sis=32) [1] r=0 lpr=32 pi=[26,32)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 11 09:15:04 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 32 pg[2.1f( empty local-lis/les=0/0 n=0 ec=26/13 lis/c=26/26 les/c/f=27/27/0 sis=32) [1] r=0 lpr=32 pi=[26,32)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 11 09:15:04 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 32 pg[2.1e( empty local-lis/les=0/0 n=0 ec=26/13 lis/c=26/26 les/c/f=27/27/0 sis=32) [1] r=0 lpr=32 pi=[26,32)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 11 09:15:04 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 32 pg[4.18( empty local-lis/les=28/29 n=0 ec=28/17 lis/c=28/28 les/c/f=29/29/0 sis=32 pruub=8.112821579s) [0] r=-1 lpr=32 pi=[28,32)/1 crt=0'0 mlcod 0'0 active pruub 53.285682678s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 11 09:15:04 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 32 pg[4.18( empty local-lis/les=28/29 n=0 ec=28/17 lis/c=28/28 les/c/f=29/29/0 sis=32 pruub=8.112794876s) [0] r=-1 lpr=32 pi=[28,32)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 53.285682678s@ mbc={}] state<Start>: transitioning to Stray
Dec 11 09:15:04 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 32 pg[5.18( empty local-lis/les=30/31 n=0 ec=30/19 lis/c=30/30 les/c/f=31/31/0 sis=32 pruub=10.172683716s) [0] r=-1 lpr=32 pi=[30,32)/1 crt=0'0 mlcod 0'0 active pruub 55.345809937s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 11 09:15:04 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 32 pg[5.18( empty local-lis/les=30/31 n=0 ec=30/19 lis/c=30/30 les/c/f=31/31/0 sis=32 pruub=10.172664642s) [0] r=-1 lpr=32 pi=[30,32)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 55.345809937s@ mbc={}] state<Start>: transitioning to Stray
Dec 11 09:15:04 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 32 pg[6.1a( empty local-lis/les=30/31 n=0 ec=30/20 lis/c=30/30 les/c/f=31/31/0 sis=32 pruub=10.169393539s) [0] r=-1 lpr=32 pi=[30,32)/1 crt=0'0 mlcod 0'0 active pruub 55.342559814s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 11 09:15:04 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 32 pg[3.1d( empty local-lis/les=28/29 n=0 ec=28/15 lis/c=28/28 les/c/f=29/29/0 sis=32 pruub=8.107460976s) [0] r=-1 lpr=32 pi=[28,32)/1 crt=0'0 mlcod 0'0 active pruub 53.280788422s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 11 09:15:04 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 32 pg[6.1a( empty local-lis/les=30/31 n=0 ec=30/20 lis/c=30/30 les/c/f=31/31/0 sis=32 pruub=10.169262886s) [0] r=-1 lpr=32 pi=[30,32)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 55.342559814s@ mbc={}] state<Start>: transitioning to Stray
Dec 11 09:15:04 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 32 pg[3.1d( empty local-lis/les=28/29 n=0 ec=28/15 lis/c=28/28 les/c/f=29/29/0 sis=32 pruub=8.107445717s) [0] r=-1 lpr=32 pi=[28,32)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 53.280788422s@ mbc={}] state<Start>: transitioning to Stray
Dec 11 09:15:04 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 32 pg[4.1a( empty local-lis/les=28/29 n=0 ec=28/17 lis/c=28/28 les/c/f=29/29/0 sis=32 pruub=8.107274055s) [0] r=-1 lpr=32 pi=[28,32)/1 crt=0'0 mlcod 0'0 active pruub 53.280769348s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 11 09:15:04 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 32 pg[4.1a( empty local-lis/les=28/29 n=0 ec=28/17 lis/c=28/28 les/c/f=29/29/0 sis=32 pruub=8.107259750s) [0] r=-1 lpr=32 pi=[28,32)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 53.280769348s@ mbc={}] state<Start>: transitioning to Stray
Dec 11 09:15:04 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 32 pg[5.1b( empty local-lis/les=30/31 n=0 ec=30/19 lis/c=30/30 les/c/f=31/31/0 sis=32 pruub=10.172351837s) [0] r=-1 lpr=32 pi=[30,32)/1 crt=0'0 mlcod 0'0 active pruub 55.345962524s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 11 09:15:04 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 32 pg[5.1b( empty local-lis/les=30/31 n=0 ec=30/19 lis/c=30/30 les/c/f=31/31/0 sis=32 pruub=10.172333717s) [0] r=-1 lpr=32 pi=[30,32)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 55.345962524s@ mbc={}] state<Start>: transitioning to Stray
Dec 11 09:15:04 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 32 pg[3.1c( empty local-lis/les=28/29 n=0 ec=28/15 lis/c=28/28 les/c/f=29/29/0 sis=32 pruub=8.107058525s) [0] r=-1 lpr=32 pi=[28,32)/1 crt=0'0 mlcod 0'0 active pruub 53.280769348s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 11 09:15:04 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 32 pg[3.1c( empty local-lis/les=28/29 n=0 ec=28/15 lis/c=28/28 les/c/f=29/29/0 sis=32 pruub=8.107042313s) [0] r=-1 lpr=32 pi=[28,32)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 53.280769348s@ mbc={}] state<Start>: transitioning to Stray
Dec 11 09:15:04 compute-0 ceph-mon[74426]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/1266408288' entity='mgr.compute-0.wwpcae' cmd='[{"prefix": "auth get-or-create", "entity": "mgr.compute-2.uiimcn", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]': finished
Dec 11 09:15:04 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 32 pg[4.1b( empty local-lis/les=28/29 n=0 ec=28/17 lis/c=28/28 les/c/f=29/29/0 sis=32 pruub=8.106960297s) [0] r=-1 lpr=32 pi=[28,32)/1 crt=0'0 mlcod 0'0 active pruub 53.280750275s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 11 09:15:04 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 32 pg[4.1b( empty local-lis/les=28/29 n=0 ec=28/17 lis/c=28/28 les/c/f=29/29/0 sis=32 pruub=8.106934547s) [0] r=-1 lpr=32 pi=[28,32)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 53.280750275s@ mbc={}] state<Start>: transitioning to Stray
Dec 11 09:15:04 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 32 pg[5.1a( empty local-lis/les=30/31 n=0 ec=30/19 lis/c=30/30 les/c/f=31/31/0 sis=32 pruub=10.171999931s) [0] r=-1 lpr=32 pi=[30,32)/1 crt=0'0 mlcod 0'0 active pruub 55.345890045s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 11 09:15:04 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 32 pg[5.1a( empty local-lis/les=30/31 n=0 ec=30/19 lis/c=30/30 les/c/f=31/31/0 sis=32 pruub=10.171980858s) [0] r=-1 lpr=32 pi=[30,32)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 55.345890045s@ mbc={}] state<Start>: transitioning to Stray
Dec 11 09:15:04 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 32 pg[6.19( empty local-lis/les=30/31 n=0 ec=30/20 lis/c=30/30 les/c/f=31/31/0 sis=32 pruub=10.171876907s) [0] r=-1 lpr=32 pi=[30,32)/1 crt=0'0 mlcod 0'0 active pruub 55.345844269s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 11 09:15:04 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 32 pg[6.19( empty local-lis/les=30/31 n=0 ec=30/20 lis/c=30/30 les/c/f=31/31/0 sis=32 pruub=10.171851158s) [0] r=-1 lpr=32 pi=[30,32)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 55.345844269s@ mbc={}] state<Start>: transitioning to Stray
Dec 11 09:15:04 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 32 pg[6.1e( empty local-lis/les=30/31 n=0 ec=30/20 lis/c=30/30 les/c/f=31/31/0 sis=32 pruub=10.171820641s) [0] r=-1 lpr=32 pi=[30,32)/1 crt=0'0 mlcod 0'0 active pruub 55.346008301s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 11 09:15:04 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 32 pg[6.1e( empty local-lis/les=30/31 n=0 ec=30/20 lis/c=30/30 les/c/f=31/31/0 sis=32 pruub=10.171804428s) [0] r=-1 lpr=32 pi=[30,32)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 55.346008301s@ mbc={}] state<Start>: transitioning to Stray
Dec 11 09:15:04 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 32 pg[3.1a( empty local-lis/les=28/29 n=0 ec=28/15 lis/c=28/28 les/c/f=29/29/0 sis=32 pruub=8.106027603s) [0] r=-1 lpr=32 pi=[28,32)/1 crt=0'0 mlcod 0'0 active pruub 53.280742645s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 11 09:15:04 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 32 pg[3.1a( empty local-lis/les=28/29 n=0 ec=28/15 lis/c=28/28 les/c/f=29/29/0 sis=32 pruub=8.106009483s) [0] r=-1 lpr=32 pi=[28,32)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 53.280742645s@ mbc={}] state<Start>: transitioning to Stray
Dec 11 09:15:04 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 32 pg[3.9( empty local-lis/les=28/29 n=0 ec=28/15 lis/c=28/28 les/c/f=29/29/0 sis=32 pruub=8.110706329s) [0] r=-1 lpr=32 pi=[28,32)/1 crt=0'0 mlcod 0'0 active pruub 53.285644531s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 11 09:15:04 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 32 pg[4.e( empty local-lis/les=28/29 n=0 ec=28/17 lis/c=28/28 les/c/f=29/29/0 sis=32 pruub=8.105772972s) [0] r=-1 lpr=32 pi=[28,32)/1 crt=0'0 mlcod 0'0 active pruub 53.280719757s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 11 09:15:04 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 32 pg[5.1c( empty local-lis/les=30/31 n=0 ec=30/19 lis/c=30/30 les/c/f=31/31/0 sis=32 pruub=10.171104431s) [0] r=-1 lpr=32 pi=[30,32)/1 crt=0'0 mlcod 0'0 active pruub 55.346054077s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 11 09:15:04 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 32 pg[3.9( empty local-lis/les=28/29 n=0 ec=28/15 lis/c=28/28 les/c/f=29/29/0 sis=32 pruub=8.110684395s) [0] r=-1 lpr=32 pi=[28,32)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 53.285644531s@ mbc={}] state<Start>: transitioning to Stray
Dec 11 09:15:04 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 32 pg[4.e( empty local-lis/les=28/29 n=0 ec=28/17 lis/c=28/28 les/c/f=29/29/0 sis=32 pruub=8.105750084s) [0] r=-1 lpr=32 pi=[28,32)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 53.280719757s@ mbc={}] state<Start>: transitioning to Stray
Dec 11 09:15:04 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 32 pg[5.1c( empty local-lis/les=30/31 n=0 ec=30/19 lis/c=30/30 les/c/f=31/31/0 sis=32 pruub=10.171081543s) [0] r=-1 lpr=32 pi=[30,32)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 55.346054077s@ mbc={}] state<Start>: transitioning to Stray
Dec 11 09:15:04 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 32 pg[5.2( empty local-lis/les=30/31 n=0 ec=30/19 lis/c=30/30 les/c/f=31/31/0 sis=32 pruub=10.170656204s) [0] r=-1 lpr=32 pi=[30,32)/1 crt=0'0 mlcod 0'0 active pruub 55.346084595s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 11 09:15:04 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 32 pg[5.2( empty local-lis/les=30/31 n=0 ec=30/19 lis/c=30/30 les/c/f=31/31/0 sis=32 pruub=10.170635223s) [0] r=-1 lpr=32 pi=[30,32)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 55.346084595s@ mbc={}] state<Start>: transitioning to Stray
Dec 11 09:15:04 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 32 pg[5.e( empty local-lis/les=30/31 n=0 ec=30/19 lis/c=30/30 les/c/f=31/31/0 sis=32 pruub=10.170576096s) [0] r=-1 lpr=32 pi=[30,32)/1 crt=0'0 mlcod 0'0 active pruub 55.346054077s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 11 09:15:04 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 32 pg[5.e( empty local-lis/les=30/31 n=0 ec=30/19 lis/c=30/30 les/c/f=31/31/0 sis=32 pruub=10.170557022s) [0] r=-1 lpr=32 pi=[30,32)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 55.346054077s@ mbc={}] state<Start>: transitioning to Stray
Dec 11 09:15:04 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 32 pg[3.3( empty local-lis/les=28/29 n=0 ec=28/15 lis/c=28/28 les/c/f=29/29/0 sis=32 pruub=8.104991913s) [0] r=-1 lpr=32 pi=[28,32)/1 crt=0'0 mlcod 0'0 active pruub 53.280513763s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 11 09:15:04 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 32 pg[3.3( empty local-lis/les=28/29 n=0 ec=28/15 lis/c=28/28 les/c/f=29/29/0 sis=32 pruub=8.104973793s) [0] r=-1 lpr=32 pi=[28,32)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 53.280513763s@ mbc={}] state<Start>: transitioning to Stray
Dec 11 09:15:04 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 32 pg[4.5( empty local-lis/les=28/29 n=0 ec=28/17 lis/c=28/28 les/c/f=29/29/0 sis=32 pruub=8.104793549s) [0] r=-1 lpr=32 pi=[28,32)/1 crt=0'0 mlcod 0'0 active pruub 53.280475616s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 11 09:15:04 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 32 pg[5.4( empty local-lis/les=30/31 n=0 ec=30/19 lis/c=30/30 les/c/f=31/31/0 sis=32 pruub=10.170772552s) [0] r=-1 lpr=32 pi=[30,32)/1 crt=0'0 mlcod 0'0 active pruub 55.346458435s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 11 09:15:04 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 32 pg[4.5( empty local-lis/les=28/29 n=0 ec=28/17 lis/c=28/28 les/c/f=29/29/0 sis=32 pruub=8.104766846s) [0] r=-1 lpr=32 pi=[28,32)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 53.280475616s@ mbc={}] state<Start>: transitioning to Stray
Dec 11 09:15:04 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 32 pg[5.4( empty local-lis/les=30/31 n=0 ec=30/19 lis/c=30/30 les/c/f=31/31/0 sis=32 pruub=10.170732498s) [0] r=-1 lpr=32 pi=[30,32)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 55.346458435s@ mbc={}] state<Start>: transitioning to Stray
Dec 11 09:15:04 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 32 pg[6.d( empty local-lis/les=30/31 n=0 ec=30/20 lis/c=30/30 les/c/f=31/31/0 sis=32 pruub=10.170871735s) [0] r=-1 lpr=32 pi=[30,32)/1 crt=0'0 mlcod 0'0 active pruub 55.346233368s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 11 09:15:04 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 32 pg[6.d( empty local-lis/les=30/31 n=0 ec=30/20 lis/c=30/30 les/c/f=31/31/0 sis=32 pruub=10.170340538s) [0] r=-1 lpr=32 pi=[30,32)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 55.346233368s@ mbc={}] state<Start>: transitioning to Stray
Dec 11 09:15:04 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 32 pg[3.5( empty local-lis/les=28/29 n=0 ec=28/15 lis/c=28/28 les/c/f=29/29/0 sis=32 pruub=8.104123116s) [0] r=-1 lpr=32 pi=[28,32)/1 crt=0'0 mlcod 0'0 active pruub 53.280246735s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 11 09:15:04 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 32 pg[5.7( empty local-lis/les=30/31 n=0 ec=30/19 lis/c=30/30 les/c/f=31/31/0 sis=32 pruub=10.170102119s) [0] r=-1 lpr=32 pi=[30,32)/1 crt=0'0 mlcod 0'0 active pruub 55.346240997s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 11 09:15:04 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 32 pg[5.7( empty local-lis/les=30/31 n=0 ec=30/19 lis/c=30/30 les/c/f=31/31/0 sis=32 pruub=10.170079231s) [0] r=-1 lpr=32 pi=[30,32)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 55.346240997s@ mbc={}] state<Start>: transitioning to Stray
Dec 11 09:15:04 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 32 pg[6.7( empty local-lis/les=30/31 n=0 ec=30/20 lis/c=30/30 les/c/f=31/31/0 sis=32 pruub=10.170031548s) [0] r=-1 lpr=32 pi=[30,32)/1 crt=0'0 mlcod 0'0 active pruub 55.346248627s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 11 09:15:04 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 32 pg[4.1( empty local-lis/les=28/29 n=0 ec=28/17 lis/c=28/28 les/c/f=29/29/0 sis=32 pruub=8.103942871s) [0] r=-1 lpr=32 pi=[28,32)/1 crt=0'0 mlcod 0'0 active pruub 53.280231476s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 11 09:15:04 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 32 pg[6.7( empty local-lis/les=30/31 n=0 ec=30/20 lis/c=30/30 les/c/f=31/31/0 sis=32 pruub=10.170017242s) [0] r=-1 lpr=32 pi=[30,32)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 55.346248627s@ mbc={}] state<Start>: transitioning to Stray
Dec 11 09:15:04 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 32 pg[4.1( empty local-lis/les=28/29 n=0 ec=28/17 lis/c=28/28 les/c/f=29/29/0 sis=32 pruub=8.103921890s) [0] r=-1 lpr=32 pi=[28,32)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 53.280231476s@ mbc={}] state<Start>: transitioning to Stray
Dec 11 09:15:04 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 32 pg[3.5( empty local-lis/les=28/29 n=0 ec=28/15 lis/c=28/28 les/c/f=29/29/0 sis=32 pruub=8.104103088s) [0] r=-1 lpr=32 pi=[28,32)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 53.280246735s@ mbc={}] state<Start>: transitioning to Stray
Dec 11 09:15:04 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 32 pg[5.1( empty local-lis/les=30/31 n=0 ec=30/19 lis/c=30/30 les/c/f=31/31/0 sis=32 pruub=10.169897079s) [0] r=-1 lpr=32 pi=[30,32)/1 crt=0'0 mlcod 0'0 active pruub 55.346420288s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 11 09:15:04 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 32 pg[5.1( empty local-lis/les=30/31 n=0 ec=30/19 lis/c=30/30 les/c/f=31/31/0 sis=32 pruub=10.169872284s) [0] r=-1 lpr=32 pi=[30,32)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 55.346420288s@ mbc={}] state<Start>: transitioning to Stray
Dec 11 09:15:04 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 32 pg[6.5( empty local-lis/les=30/31 n=0 ec=30/20 lis/c=30/30 les/c/f=31/31/0 sis=32 pruub=10.169737816s) [0] r=-1 lpr=32 pi=[30,32)/1 crt=0'0 mlcod 0'0 active pruub 55.346385956s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 11 09:15:04 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 32 pg[6.5( empty local-lis/les=30/31 n=0 ec=30/20 lis/c=30/30 les/c/f=31/31/0 sis=32 pruub=10.169718742s) [0] r=-1 lpr=32 pi=[30,32)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 55.346385956s@ mbc={}] state<Start>: transitioning to Stray
Dec 11 09:15:04 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 32 pg[6.2( empty local-lis/les=30/31 n=0 ec=30/20 lis/c=30/30 les/c/f=31/31/0 sis=32 pruub=10.169686317s) [0] r=-1 lpr=32 pi=[30,32)/1 crt=0'0 mlcod 0'0 active pruub 55.346363068s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 11 09:15:04 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 32 pg[4.d( empty local-lis/les=28/29 n=0 ec=28/17 lis/c=28/28 les/c/f=29/29/0 sis=32 pruub=8.103207588s) [0] r=-1 lpr=32 pi=[28,32)/1 crt=0'0 mlcod 0'0 active pruub 53.279911041s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 11 09:15:04 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 32 pg[6.2( empty local-lis/les=30/31 n=0 ec=30/20 lis/c=30/30 les/c/f=31/31/0 sis=32 pruub=10.169664383s) [0] r=-1 lpr=32 pi=[30,32)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 55.346363068s@ mbc={}] state<Start>: transitioning to Stray
Dec 11 09:15:04 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 32 pg[4.d( empty local-lis/les=28/29 n=0 ec=28/17 lis/c=28/28 les/c/f=29/29/0 sis=32 pruub=8.103191376s) [0] r=-1 lpr=32 pi=[28,32)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 53.279911041s@ mbc={}] state<Start>: transitioning to Stray
Dec 11 09:15:04 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 32 pg[3.a( empty local-lis/les=28/29 n=0 ec=28/15 lis/c=28/28 les/c/f=29/29/0 sis=32 pruub=8.103253365s) [0] r=-1 lpr=32 pi=[28,32)/1 crt=0'0 mlcod 0'0 active pruub 53.280052185s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 11 09:15:04 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 32 pg[3.a( empty local-lis/les=28/29 n=0 ec=28/15 lis/c=28/28 les/c/f=29/29/0 sis=32 pruub=8.103233337s) [0] r=-1 lpr=32 pi=[28,32)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 53.280052185s@ mbc={}] state<Start>: transitioning to Stray
Dec 11 09:15:04 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 32 pg[5.f( empty local-lis/les=30/31 n=0 ec=30/19 lis/c=30/30 les/c/f=31/31/0 sis=32 pruub=10.169182777s) [0] r=-1 lpr=32 pi=[30,32)/1 crt=0'0 mlcod 0'0 active pruub 55.346069336s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 11 09:15:04 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 32 pg[5.f( empty local-lis/les=30/31 n=0 ec=30/19 lis/c=30/30 les/c/f=31/31/0 sis=32 pruub=10.169165611s) [0] r=-1 lpr=32 pi=[30,32)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 55.346069336s@ mbc={}] state<Start>: transitioning to Stray
Dec 11 09:15:04 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 32 pg[3.c( empty local-lis/les=28/29 n=0 ec=28/15 lis/c=28/28 les/c/f=29/29/0 sis=32 pruub=8.102690697s) [0] r=-1 lpr=32 pi=[28,32)/1 crt=0'0 mlcod 0'0 active pruub 53.279781342s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 11 09:15:04 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 32 pg[6.3( empty local-lis/les=30/31 n=0 ec=30/20 lis/c=30/30 les/c/f=31/31/0 sis=32 pruub=10.173582077s) [0] r=-1 lpr=32 pi=[30,32)/1 crt=0'0 mlcod 0'0 active pruub 55.350677490s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 11 09:15:04 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 32 pg[3.c( empty local-lis/les=28/29 n=0 ec=28/15 lis/c=28/28 les/c/f=29/29/0 sis=32 pruub=8.102669716s) [0] r=-1 lpr=32 pi=[28,32)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 53.279781342s@ mbc={}] state<Start>: transitioning to Stray
Dec 11 09:15:04 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 32 pg[4.a( empty local-lis/les=28/29 n=0 ec=28/17 lis/c=28/28 les/c/f=29/29/0 sis=32 pruub=8.102451324s) [0] r=-1 lpr=32 pi=[28,32)/1 crt=0'0 mlcod 0'0 active pruub 53.279582977s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 11 09:15:04 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 32 pg[6.3( empty local-lis/les=30/31 n=0 ec=30/20 lis/c=30/30 les/c/f=31/31/0 sis=32 pruub=10.173550606s) [0] r=-1 lpr=32 pi=[30,32)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 55.350677490s@ mbc={}] state<Start>: transitioning to Stray
Dec 11 09:15:04 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 32 pg[4.a( empty local-lis/les=28/29 n=0 ec=28/17 lis/c=28/28 les/c/f=29/29/0 sis=32 pruub=8.102437019s) [0] r=-1 lpr=32 pi=[28,32)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 53.279582977s@ mbc={}] state<Start>: transitioning to Stray
Dec 11 09:15:04 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 32 pg[3.d( empty local-lis/les=28/29 n=0 ec=28/15 lis/c=28/28 les/c/f=29/29/0 sis=32 pruub=8.102348328s) [0] r=-1 lpr=32 pi=[28,32)/1 crt=0'0 mlcod 0'0 active pruub 53.279628754s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 11 09:15:04 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 32 pg[4.9( empty local-lis/les=28/29 n=0 ec=28/17 lis/c=28/28 les/c/f=29/29/0 sis=32 pruub=8.102073669s) [0] r=-1 lpr=32 pi=[28,32)/1 crt=0'0 mlcod 0'0 active pruub 53.279445648s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 11 09:15:04 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 32 pg[6.8( empty local-lis/les=30/31 n=0 ec=30/20 lis/c=30/30 les/c/f=31/31/0 sis=32 pruub=10.169135094s) [0] r=-1 lpr=32 pi=[30,32)/1 crt=0'0 mlcod 0'0 active pruub 55.346504211s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 11 09:15:04 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 32 pg[4.9( empty local-lis/les=28/29 n=0 ec=28/17 lis/c=28/28 les/c/f=29/29/0 sis=32 pruub=8.102051735s) [0] r=-1 lpr=32 pi=[28,32)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 53.279445648s@ mbc={}] state<Start>: transitioning to Stray
Dec 11 09:15:04 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 32 pg[6.8( empty local-lis/les=30/31 n=0 ec=30/20 lis/c=30/30 les/c/f=31/31/0 sis=32 pruub=10.169103622s) [0] r=-1 lpr=32 pi=[30,32)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 55.346504211s@ mbc={}] state<Start>: transitioning to Stray
Dec 11 09:15:04 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 32 pg[3.e( empty local-lis/les=28/29 n=0 ec=28/15 lis/c=28/28 les/c/f=29/29/0 sis=32 pruub=8.101864815s) [0] r=-1 lpr=32 pi=[28,32)/1 crt=0'0 mlcod 0'0 active pruub 53.279438019s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 11 09:15:04 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 32 pg[3.e( empty local-lis/les=28/29 n=0 ec=28/15 lis/c=28/28 les/c/f=29/29/0 sis=32 pruub=8.101835251s) [0] r=-1 lpr=32 pi=[28,32)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 53.279438019s@ mbc={}] state<Start>: transitioning to Stray
Dec 11 09:15:04 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 32 pg[4.8( empty local-lis/les=28/29 n=0 ec=28/17 lis/c=28/28 les/c/f=29/29/0 sis=32 pruub=8.101656914s) [0] r=-1 lpr=32 pi=[28,32)/1 crt=0'0 mlcod 0'0 active pruub 53.279335022s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 11 09:15:04 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 32 pg[4.8( empty local-lis/les=28/29 n=0 ec=28/17 lis/c=28/28 les/c/f=29/29/0 sis=32 pruub=8.101629257s) [0] r=-1 lpr=32 pi=[28,32)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 53.279335022s@ mbc={}] state<Start>: transitioning to Stray
Dec 11 09:15:04 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 32 pg[3.d( empty local-lis/les=28/29 n=0 ec=28/15 lis/c=28/28 les/c/f=29/29/0 sis=32 pruub=8.102333069s) [0] r=-1 lpr=32 pi=[28,32)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 53.279628754s@ mbc={}] state<Start>: transitioning to Stray
Dec 11 09:15:04 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 32 pg[4.c( empty local-lis/les=28/29 n=0 ec=28/17 lis/c=28/28 les/c/f=29/29/0 sis=32 pruub=8.102038383s) [0] r=-1 lpr=32 pi=[28,32)/1 crt=0'0 mlcod 0'0 active pruub 53.279819489s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 11 09:15:04 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 32 pg[6.e( empty local-lis/les=30/31 n=0 ec=30/20 lis/c=30/30 les/c/f=31/31/0 sis=32 pruub=10.168646812s) [0] r=-1 lpr=32 pi=[30,32)/1 crt=0'0 mlcod 0'0 active pruub 55.346500397s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 11 09:15:04 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 32 pg[6.e( empty local-lis/les=30/31 n=0 ec=30/20 lis/c=30/30 les/c/f=31/31/0 sis=32 pruub=10.168619156s) [0] r=-1 lpr=32 pi=[30,32)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 55.346500397s@ mbc={}] state<Start>: transitioning to Stray
Dec 11 09:15:04 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 32 pg[4.c( empty local-lis/les=28/29 n=0 ec=28/17 lis/c=28/28 les/c/f=29/29/0 sis=32 pruub=8.101945877s) [0] r=-1 lpr=32 pi=[28,32)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 53.279819489s@ mbc={}] state<Start>: transitioning to Stray
Dec 11 09:15:04 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 32 pg[3.f( empty local-lis/les=28/29 n=0 ec=28/15 lis/c=28/28 les/c/f=29/29/0 sis=32 pruub=8.102037430s) [0] r=-1 lpr=32 pi=[28,32)/1 crt=0'0 mlcod 0'0 active pruub 53.279808044s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 11 09:15:04 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 32 pg[5.9( empty local-lis/les=30/31 n=0 ec=30/19 lis/c=30/30 les/c/f=31/31/0 sis=32 pruub=10.172378540s) [0] r=-1 lpr=32 pi=[30,32)/1 crt=0'0 mlcod 0'0 active pruub 55.350433350s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 11 09:15:04 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 32 pg[5.9( empty local-lis/les=30/31 n=0 ec=30/19 lis/c=30/30 les/c/f=31/31/0 sis=32 pruub=10.172226906s) [0] r=-1 lpr=32 pi=[30,32)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 55.350433350s@ mbc={}] state<Start>: transitioning to Stray
Dec 11 09:15:04 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 32 pg[3.f( empty local-lis/les=28/29 n=0 ec=28/15 lis/c=28/28 les/c/f=29/29/0 sis=32 pruub=8.101552963s) [0] r=-1 lpr=32 pi=[28,32)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 53.279808044s@ mbc={}] state<Start>: transitioning to Stray
Dec 11 09:15:04 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 32 pg[5.16( empty local-lis/les=30/31 n=0 ec=30/19 lis/c=30/30 les/c/f=31/31/0 sis=32 pruub=10.172147751s) [0] r=-1 lpr=32 pi=[30,32)/1 crt=0'0 mlcod 0'0 active pruub 55.350551605s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 11 09:15:04 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 32 pg[6.15( empty local-lis/les=30/31 n=0 ec=30/20 lis/c=30/30 les/c/f=31/31/0 sis=32 pruub=10.171963692s) [0] r=-1 lpr=32 pi=[30,32)/1 crt=0'0 mlcod 0'0 active pruub 55.350395203s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 11 09:15:04 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 32 pg[5.16( empty local-lis/les=30/31 n=0 ec=30/19 lis/c=30/30 les/c/f=31/31/0 sis=32 pruub=10.172131538s) [0] r=-1 lpr=32 pi=[30,32)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 55.350551605s@ mbc={}] state<Start>: transitioning to Stray
Dec 11 09:15:04 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 32 pg[6.15( empty local-lis/les=30/31 n=0 ec=30/20 lis/c=30/30 les/c/f=31/31/0 sis=32 pruub=10.171944618s) [0] r=-1 lpr=32 pi=[30,32)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 55.350395203s@ mbc={}] state<Start>: transitioning to Stray
Dec 11 09:15:04 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 32 pg[3.10( empty local-lis/les=28/29 n=0 ec=28/15 lis/c=28/28 les/c/f=29/29/0 sis=32 pruub=8.100766182s) [0] r=-1 lpr=32 pi=[28,32)/1 crt=0'0 mlcod 0'0 active pruub 53.279319763s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 11 09:15:04 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 32 pg[3.11( empty local-lis/les=28/29 n=0 ec=28/15 lis/c=28/28 les/c/f=29/29/0 sis=32 pruub=8.100195885s) [0] r=-1 lpr=32 pi=[28,32)/1 crt=0'0 mlcod 0'0 active pruub 53.278839111s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 11 09:15:04 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 32 pg[3.10( empty local-lis/les=28/29 n=0 ec=28/15 lis/c=28/28 les/c/f=29/29/0 sis=32 pruub=8.100734711s) [0] r=-1 lpr=32 pi=[28,32)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 53.279319763s@ mbc={}] state<Start>: transitioning to Stray
Dec 11 09:15:04 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 32 pg[3.11( empty local-lis/les=28/29 n=0 ec=28/15 lis/c=28/28 les/c/f=29/29/0 sis=32 pruub=8.100174904s) [0] r=-1 lpr=32 pi=[28,32)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 53.278839111s@ mbc={}] state<Start>: transitioning to Stray
Dec 11 09:15:04 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 32 pg[6.a( empty local-lis/les=30/31 n=0 ec=30/20 lis/c=30/30 les/c/f=31/31/0 sis=32 pruub=10.172364235s) [0] r=-1 lpr=32 pi=[30,32)/1 crt=0'0 mlcod 0'0 active pruub 55.350471497s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 11 09:15:04 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 32 pg[4.15( empty local-lis/les=28/29 n=0 ec=28/17 lis/c=28/28 les/c/f=29/29/0 sis=32 pruub=8.100248337s) [0] r=-1 lpr=32 pi=[28,32)/1 crt=0'0 mlcod 0'0 active pruub 53.279010773s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 11 09:15:04 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 32 pg[4.15( empty local-lis/les=28/29 n=0 ec=28/17 lis/c=28/28 les/c/f=29/29/0 sis=32 pruub=8.100208282s) [0] r=-1 lpr=32 pi=[28,32)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 53.279010773s@ mbc={}] state<Start>: transitioning to Stray
Dec 11 09:15:04 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 32 pg[6.17( empty local-lis/les=30/31 n=0 ec=30/20 lis/c=30/30 les/c/f=31/31/0 sis=32 pruub=10.171737671s) [0] r=-1 lpr=32 pi=[30,32)/1 crt=0'0 mlcod 0'0 active pruub 55.350601196s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 11 09:15:04 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 32 pg[6.17( empty local-lis/les=30/31 n=0 ec=30/20 lis/c=30/30 les/c/f=31/31/0 sis=32 pruub=10.171709061s) [0] r=-1 lpr=32 pi=[30,32)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 55.350601196s@ mbc={}] state<Start>: transitioning to Stray
Dec 11 09:15:04 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 32 pg[3.13( empty local-lis/les=28/29 n=0 ec=28/15 lis/c=28/28 les/c/f=29/29/0 sis=32 pruub=8.100236893s) [0] r=-1 lpr=32 pi=[28,32)/1 crt=0'0 mlcod 0'0 active pruub 53.279155731s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 11 09:15:04 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 32 pg[3.13( empty local-lis/les=28/29 n=0 ec=28/15 lis/c=28/28 les/c/f=29/29/0 sis=32 pruub=8.100214005s) [0] r=-1 lpr=32 pi=[28,32)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 53.279155731s@ mbc={}] state<Start>: transitioning to Stray
Dec 11 09:15:04 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 32 pg[6.a( empty local-lis/les=30/31 n=0 ec=30/20 lis/c=30/30 les/c/f=31/31/0 sis=32 pruub=10.171628952s) [0] r=-1 lpr=32 pi=[30,32)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 55.350471497s@ mbc={}] state<Start>: transitioning to Stray
Dec 11 09:15:04 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 32 pg[5.15( empty local-lis/les=30/31 n=0 ec=30/19 lis/c=30/30 les/c/f=31/31/0 sis=32 pruub=10.171705246s) [0] r=-1 lpr=32 pi=[30,32)/1 crt=0'0 mlcod 0'0 active pruub 55.350742340s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 11 09:15:04 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 32 pg[5.15( empty local-lis/les=30/31 n=0 ec=30/19 lis/c=30/30 les/c/f=31/31/0 sis=32 pruub=10.171690941s) [0] r=-1 lpr=32 pi=[30,32)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 55.350742340s@ mbc={}] state<Start>: transitioning to Stray
Dec 11 09:15:04 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 32 pg[3.14( empty local-lis/les=28/29 n=0 ec=28/15 lis/c=28/28 les/c/f=29/29/0 sis=32 pruub=8.099457741s) [0] r=-1 lpr=32 pi=[28,32)/1 crt=0'0 mlcod 0'0 active pruub 53.278583527s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 11 09:15:04 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 32 pg[3.14( empty local-lis/les=28/29 n=0 ec=28/15 lis/c=28/28 les/c/f=29/29/0 sis=32 pruub=8.099443436s) [0] r=-1 lpr=32 pi=[28,32)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 53.278583527s@ mbc={}] state<Start>: transitioning to Stray
Dec 11 09:15:04 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 32 pg[4.13( empty local-lis/les=28/29 n=0 ec=28/17 lis/c=28/28 les/c/f=29/29/0 sis=32 pruub=8.099595070s) [0] r=-1 lpr=32 pi=[28,32)/1 crt=0'0 mlcod 0'0 active pruub 53.278766632s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 11 09:15:04 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 32 pg[4.13( empty local-lis/les=28/29 n=0 ec=28/17 lis/c=28/28 les/c/f=29/29/0 sis=32 pruub=8.099563599s) [0] r=-1 lpr=32 pi=[28,32)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 53.278766632s@ mbc={}] state<Start>: transitioning to Stray
Dec 11 09:15:04 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 32 pg[3.15( empty local-lis/les=28/29 n=0 ec=28/15 lis/c=28/28 les/c/f=29/29/0 sis=32 pruub=8.101330757s) [0] r=-1 lpr=32 pi=[28,32)/1 crt=0'0 mlcod 0'0 active pruub 53.280643463s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 11 09:15:04 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 32 pg[3.15( empty local-lis/les=28/29 n=0 ec=28/15 lis/c=28/28 les/c/f=29/29/0 sis=32 pruub=8.101308823s) [0] r=-1 lpr=32 pi=[28,32)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 53.280643463s@ mbc={}] state<Start>: transitioning to Stray
Dec 11 09:15:04 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 32 pg[3.16( empty local-lis/les=28/29 n=0 ec=28/15 lis/c=28/28 les/c/f=29/29/0 sis=32 pruub=8.098817825s) [0] r=-1 lpr=32 pi=[28,32)/1 crt=0'0 mlcod 0'0 active pruub 53.278354645s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 11 09:15:04 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 32 pg[3.16( empty local-lis/les=28/29 n=0 ec=28/15 lis/c=28/28 les/c/f=29/29/0 sis=32 pruub=8.098784447s) [0] r=-1 lpr=32 pi=[28,32)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 53.278354645s@ mbc={}] state<Start>: transitioning to Stray
Dec 11 09:15:04 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 32 pg[6.12( empty local-lis/les=30/31 n=0 ec=30/20 lis/c=30/30 les/c/f=31/31/0 sis=32 pruub=10.170985222s) [0] r=-1 lpr=32 pi=[30,32)/1 crt=0'0 mlcod 0'0 active pruub 55.350757599s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 11 09:15:04 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 32 pg[4.1f( empty local-lis/les=28/29 n=0 ec=28/17 lis/c=28/28 les/c/f=29/29/0 sis=32 pruub=8.093479156s) [0] r=-1 lpr=32 pi=[28,32)/1 crt=0'0 mlcod 0'0 active pruub 53.273330688s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 11 09:15:04 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 32 pg[5.11( empty local-lis/les=30/31 n=0 ec=30/19 lis/c=30/30 les/c/f=31/31/0 sis=32 pruub=10.171149254s) [0] r=-1 lpr=32 pi=[30,32)/1 crt=0'0 mlcod 0'0 active pruub 55.351005554s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 11 09:15:04 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 32 pg[5.10( empty local-lis/les=30/31 n=0 ec=30/19 lis/c=30/30 les/c/f=31/31/0 sis=32 pruub=10.170863152s) [0] r=-1 lpr=32 pi=[30,32)/1 crt=0'0 mlcod 0'0 active pruub 55.350738525s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 11 09:15:04 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 32 pg[5.11( empty local-lis/les=30/31 n=0 ec=30/19 lis/c=30/30 les/c/f=31/31/0 sis=32 pruub=10.171118736s) [0] r=-1 lpr=32 pi=[30,32)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 55.351005554s@ mbc={}] state<Start>: transitioning to Stray
Dec 11 09:15:04 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 32 pg[4.1f( empty local-lis/les=28/29 n=0 ec=28/17 lis/c=28/28 les/c/f=29/29/0 sis=32 pruub=8.093430519s) [0] r=-1 lpr=32 pi=[28,32)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 53.273330688s@ mbc={}] state<Start>: transitioning to Stray
Dec 11 09:15:04 compute-0 ceph-mon[74426]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "mgr services"} v 0)
Dec 11 09:15:04 compute-0 ceph-mon[74426]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/1266408288' entity='mgr.compute-0.wwpcae' cmd=[{"prefix": "mgr services"}]: dispatch
Dec 11 09:15:04 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 32 pg[5.10( empty local-lis/les=30/31 n=0 ec=30/19 lis/c=30/30 les/c/f=31/31/0 sis=32 pruub=10.170823097s) [0] r=-1 lpr=32 pi=[30,32)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 55.350738525s@ mbc={}] state<Start>: transitioning to Stray
Dec 11 09:15:04 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 32 pg[5.1f( empty local-lis/les=30/31 n=0 ec=30/19 lis/c=30/30 les/c/f=31/31/0 sis=32 pruub=10.170714378s) [0] r=-1 lpr=32 pi=[30,32)/1 crt=0'0 mlcod 0'0 active pruub 55.350898743s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 11 09:15:04 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 32 pg[5.1f( empty local-lis/les=30/31 n=0 ec=30/19 lis/c=30/30 les/c/f=31/31/0 sis=32 pruub=10.170694351s) [0] r=-1 lpr=32 pi=[30,32)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 55.350898743s@ mbc={}] state<Start>: transitioning to Stray
Dec 11 09:15:04 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 32 pg[6.1c( empty local-lis/les=30/31 n=0 ec=30/20 lis/c=30/30 les/c/f=31/31/0 sis=32 pruub=10.170547485s) [0] r=-1 lpr=32 pi=[30,32)/1 crt=0'0 mlcod 0'0 active pruub 55.350769043s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 11 09:15:04 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 32 pg[6.1c( empty local-lis/les=30/31 n=0 ec=30/20 lis/c=30/30 les/c/f=31/31/0 sis=32 pruub=10.170524597s) [0] r=-1 lpr=32 pi=[30,32)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 55.350769043s@ mbc={}] state<Start>: transitioning to Stray
Dec 11 09:15:04 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 32 pg[6.12( empty local-lis/les=30/31 n=0 ec=30/20 lis/c=30/30 les/c/f=31/31/0 sis=32 pruub=10.170963287s) [0] r=-1 lpr=32 pi=[30,32)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 55.350757599s@ mbc={}] state<Start>: transitioning to Stray
Dec 11 09:15:04 compute-0 ceph-mon[74426]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 11 09:15:04 compute-0 ceph-mon[74426]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/1266408288' entity='mgr.compute-0.wwpcae' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 11 09:15:04 compute-0 ceph-mgr[74715]: [cephadm INFO cephadm.serve] Deploying daemon mgr.compute-2.uiimcn on compute-2
Dec 11 09:15:04 compute-0 ceph-mgr[74715]: log_channel(cephadm) log [INF] : Deploying daemon mgr.compute-2.uiimcn on compute-2
Dec 11 09:15:04 compute-0 ceph-osd[82859]: log_channel(cluster) log [DBG] : 3.1f deep-scrub starts
Dec 11 09:15:04 compute-0 ceph-osd[82859]: log_channel(cluster) log [DBG] : 3.1f deep-scrub ok
Dec 11 09:15:04 compute-0 ceph-mgr[74715]: log_channel(cluster) log [DBG] : pgmap v91: 193 pgs: 31 unknown, 162 active+clean; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Dec 11 09:15:04 compute-0 ceph-mon[74426]: mon.compute-0@0(leader) e2  adding peer [v2:192.168.122.101:3300/0,v1:192.168.122.101:6789/0] to list of hints
Dec 11 09:15:04 compute-0 ceph-mon[74426]: mon.compute-0@0(leader).monmap v2 adding/updating compute-1 at [v2:192.168.122.101:3300/0,v1:192.168.122.101:6789/0] to monitor cluster
Dec 11 09:15:04 compute-0 ceph-mgr[74715]: mgr.server handle_open ignoring open from mon.compute-1 192.168.122.101:0/4028123285; not ready for session (expect reconnect)
Dec 11 09:15:04 compute-0 ceph-mon[74426]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "mon metadata", "id": "compute-1"} v 0)
Dec 11 09:15:04 compute-0 ceph-mon[74426]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/1266408288' entity='mgr.compute-0.wwpcae' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Dec 11 09:15:04 compute-0 ceph-mgr[74715]: mgr finish mon failed to return metadata for mon.compute-1: (2) No such file or directory
Dec 11 09:15:04 compute-0 ceph-mon[74426]: mon.compute-0@0(probing) e3 handle_command mon_command({"prefix": "mon metadata", "id": "compute-0"} v 0)
Dec 11 09:15:04 compute-0 ceph-mon[74426]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/1266408288' entity='mgr.compute-0.wwpcae' cmd=[{"prefix": "mon metadata", "id": "compute-0"}]: dispatch
Dec 11 09:15:04 compute-0 ceph-mon[74426]: mon.compute-0@0(probing) e3 handle_command mon_command({"prefix": "mon metadata", "id": "compute-1"} v 0)
Dec 11 09:15:04 compute-0 ceph-mon[74426]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/1266408288' entity='mgr.compute-0.wwpcae' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Dec 11 09:15:04 compute-0 ceph-mon[74426]: mon.compute-0@0(probing) e3 handle_command mon_command({"prefix": "mon metadata", "id": "compute-2"} v 0)
Dec 11 09:15:04 compute-0 ceph-mon[74426]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/1266408288' entity='mgr.compute-0.wwpcae' cmd=[{"prefix": "mon metadata", "id": "compute-2"}]: dispatch
Dec 11 09:15:04 compute-0 ceph-mgr[74715]: mgr finish mon failed to return metadata for mon.compute-1: (22) Invalid argument
Dec 11 09:15:04 compute-0 ceph-mon[74426]: log_channel(cluster) log [INF] : mon.compute-0 calling monitor election
Dec 11 09:15:04 compute-0 ceph-mon[74426]: paxos.0).electionLogic(10) init, last seen epoch 10
Dec 11 09:15:04 compute-0 ceph-mon[74426]: mon.compute-0@0(electing) e3 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Dec 11 09:15:04 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-mgr-compute-0-wwpcae[74711]: 2025-12-11T09:15:04.878+0000 7f8d349e3640 -1 mgr.server handle_report got status from non-daemon mon.compute-2
Dec 11 09:15:04 compute-0 ceph-mgr[74715]: mgr.server handle_report got status from non-daemon mon.compute-2
Dec 11 09:15:05 compute-0 ceph-osd[82859]: log_channel(cluster) log [DBG] : 5.19 scrub starts
Dec 11 09:15:05 compute-0 ceph-osd[82859]: log_channel(cluster) log [DBG] : 5.19 scrub ok
Dec 11 09:15:05 compute-0 ceph-mon[74426]: mon.compute-0@0(electing) e3 handle_auth_request failed to assign global_id
Dec 11 09:15:05 compute-0 ceph-mon[74426]: mon.compute-0@0(electing) e3 handle_auth_request failed to assign global_id
Dec 11 09:15:05 compute-0 ceph-mon[74426]: mon.compute-0@0(electing) e3 handle_auth_request failed to assign global_id
Dec 11 09:15:05 compute-0 ceph-mon[74426]: mon.compute-0@0(electing) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0)
Dec 11 09:15:05 compute-0 ceph-mgr[74715]: mgr.server handle_open ignoring open from mon.compute-1 192.168.122.101:0/4028123285; not ready for session (expect reconnect)
Dec 11 09:15:05 compute-0 ceph-mon[74426]: mon.compute-0@0(electing) e3 handle_command mon_command({"prefix": "mon metadata", "id": "compute-1"} v 0)
Dec 11 09:15:05 compute-0 ceph-mon[74426]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/1266408288' entity='mgr.compute-0.wwpcae' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Dec 11 09:15:05 compute-0 ceph-mgr[74715]: mgr finish mon failed to return metadata for mon.compute-1: (22) Invalid argument
Dec 11 09:15:05 compute-0 ceph-mon[74426]: mon.compute-0@0(electing) e3 handle_auth_request failed to assign global_id
Dec 11 09:15:06 compute-0 ceph-mon[74426]: mon.compute-0@0(electing) e3 handle_auth_request failed to assign global_id
Dec 11 09:15:06 compute-0 ceph-osd[82859]: log_channel(cluster) log [DBG] : 3.1e scrub starts
Dec 11 09:15:06 compute-0 ceph-osd[82859]: log_channel(cluster) log [DBG] : 3.1e scrub ok
Dec 11 09:15:06 compute-0 ceph-mgr[74715]: log_channel(cluster) log [DBG] : pgmap v92: 193 pgs: 15 peering, 31 unknown, 147 active+clean; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Dec 11 09:15:06 compute-0 ceph-mon[74426]: mon.compute-0@0(electing) e3 handle_auth_request failed to assign global_id
Dec 11 09:15:06 compute-0 ceph-mgr[74715]: mgr.server handle_open ignoring open from mon.compute-1 192.168.122.101:0/4028123285; not ready for session (expect reconnect)
Dec 11 09:15:06 compute-0 ceph-mon[74426]: mon.compute-0@0(electing) e3 handle_command mon_command({"prefix": "mon metadata", "id": "compute-1"} v 0)
Dec 11 09:15:06 compute-0 ceph-mon[74426]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/1266408288' entity='mgr.compute-0.wwpcae' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Dec 11 09:15:06 compute-0 ceph-mgr[74715]: mgr finish mon failed to return metadata for mon.compute-1: (22) Invalid argument
Dec 11 09:15:06 compute-0 ceph-mon[74426]: mon.compute-0@0(electing) e3 handle_auth_request failed to assign global_id
Dec 11 09:15:07 compute-0 ceph-mon[74426]: mon.compute-0@0(electing) e3 handle_auth_request failed to assign global_id
Dec 11 09:15:07 compute-0 ceph-osd[82859]: log_channel(cluster) log [DBG] : 4.19 scrub starts
Dec 11 09:15:07 compute-0 ceph-osd[82859]: log_channel(cluster) log [DBG] : 4.19 scrub ok
Dec 11 09:15:07 compute-0 ceph-mgr[74715]: mgr.server handle_open ignoring open from mon.compute-1 192.168.122.101:0/4028123285; not ready for session (expect reconnect)
Dec 11 09:15:07 compute-0 ceph-mon[74426]: mon.compute-0@0(electing) e3 handle_command mon_command({"prefix": "mon metadata", "id": "compute-1"} v 0)
Dec 11 09:15:07 compute-0 ceph-mon[74426]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/1266408288' entity='mgr.compute-0.wwpcae' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Dec 11 09:15:07 compute-0 ceph-mgr[74715]: mgr finish mon failed to return metadata for mon.compute-1: (22) Invalid argument
Dec 11 09:15:08 compute-0 ceph-osd[82859]: log_channel(cluster) log [DBG] : 6.1b scrub starts
Dec 11 09:15:08 compute-0 ceph-osd[82859]: log_channel(cluster) log [DBG] : 6.1b scrub ok
Dec 11 09:15:08 compute-0 ceph-mgr[74715]: log_channel(cluster) log [DBG] : pgmap v93: 193 pgs: 78 peering, 31 unknown, 84 active+clean; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Dec 11 09:15:08 compute-0 ceph-mon[74426]: mon.compute-0@0(electing) e3 handle_auth_request failed to assign global_id
Dec 11 09:15:08 compute-0 ceph-mon[74426]: mon.compute-0@0(electing) e3 handle_auth_request failed to assign global_id
Dec 11 09:15:08 compute-0 ceph-mon[74426]: mon.compute-0@0(electing) e3 handle_auth_request failed to assign global_id
Dec 11 09:15:08 compute-0 ceph-mgr[74715]: mgr.server handle_open ignoring open from mon.compute-1 192.168.122.101:0/4028123285; not ready for session (expect reconnect)
Dec 11 09:15:08 compute-0 ceph-mon[74426]: mon.compute-0@0(electing) e3 handle_command mon_command({"prefix": "mon metadata", "id": "compute-1"} v 0)
Dec 11 09:15:08 compute-0 ceph-mon[74426]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/1266408288' entity='mgr.compute-0.wwpcae' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Dec 11 09:15:08 compute-0 ceph-mgr[74715]: mgr finish mon failed to return metadata for mon.compute-1: (22) Invalid argument
Dec 11 09:15:08 compute-0 ceph-mon[74426]: mon.compute-0@0(electing) e3 handle_auth_request failed to assign global_id
Dec 11 09:15:08 compute-0 ceph-mgr[74715]: [progress INFO root] Writing back 10 completed events
Dec 11 09:15:08 compute-0 ceph-mon[74426]: mon.compute-0@0(electing) e3 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0)
Dec 11 09:15:09 compute-0 ceph-mon[74426]: mon.compute-0@0(electing) e3 handle_auth_request failed to assign global_id
Dec 11 09:15:09 compute-0 ceph-osd[82859]: log_channel(cluster) log [DBG] : 6.18 deep-scrub starts
Dec 11 09:15:09 compute-0 ceph-osd[82859]: log_channel(cluster) log [DBG] : 6.18 deep-scrub ok
Dec 11 09:15:09 compute-0 ceph-mon[74426]: mon.compute-0@0(electing) e3 handle_auth_request failed to assign global_id
Dec 11 09:15:09 compute-0 ceph-mgr[74715]: mgr.server handle_open ignoring open from mon.compute-1 192.168.122.101:0/4028123285; not ready for session (expect reconnect)
Dec 11 09:15:09 compute-0 ceph-mon[74426]: mon.compute-0@0(electing) e3 handle_command mon_command({"prefix": "mon metadata", "id": "compute-1"} v 0)
Dec 11 09:15:09 compute-0 ceph-mon[74426]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/1266408288' entity='mgr.compute-0.wwpcae' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Dec 11 09:15:09 compute-0 ceph-mgr[74715]: mgr finish mon failed to return metadata for mon.compute-1: (22) Invalid argument
Dec 11 09:15:09 compute-0 ceph-mon[74426]: paxos.0).electionLogic(11) init, last seen epoch 11, mid-election, bumping
Dec 11 09:15:09 compute-0 ceph-mon[74426]: mon.compute-0@0(electing) e3 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Dec 11 09:15:09 compute-0 ceph-mon[74426]: log_channel(cluster) log [INF] : mon.compute-0 is new leader, mons compute-0,compute-2,compute-1 in quorum (ranks 0,1,2)
Dec 11 09:15:09 compute-0 ceph-mon[74426]: log_channel(cluster) log [DBG] : monmap epoch 3
Dec 11 09:15:09 compute-0 ceph-mon[74426]: log_channel(cluster) log [DBG] : fsid 31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060
Dec 11 09:15:09 compute-0 ceph-mon[74426]: log_channel(cluster) log [DBG] : last_changed 2025-12-11T09:15:04.733539+0000
Dec 11 09:15:09 compute-0 ceph-mon[74426]: log_channel(cluster) log [DBG] : created 2025-12-11T09:12:23.814502+0000
Dec 11 09:15:09 compute-0 ceph-mon[74426]: log_channel(cluster) log [DBG] : min_mon_release 19 (squid)
Dec 11 09:15:09 compute-0 ceph-mon[74426]: log_channel(cluster) log [DBG] : election_strategy: 1
Dec 11 09:15:09 compute-0 ceph-mon[74426]: log_channel(cluster) log [DBG] : 0: [v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0] mon.compute-0
Dec 11 09:15:09 compute-0 ceph-mon[74426]: log_channel(cluster) log [DBG] : 1: [v2:192.168.122.102:3300/0,v1:192.168.122.102:6789/0] mon.compute-2
Dec 11 09:15:09 compute-0 ceph-mon[74426]: log_channel(cluster) log [DBG] : 2: [v2:192.168.122.101:3300/0,v1:192.168.122.101:6789/0] mon.compute-1
Dec 11 09:15:09 compute-0 ceph-mon[74426]: mon.compute-0@0(leader) e3 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Dec 11 09:15:09 compute-0 ceph-mon[74426]: log_channel(cluster) log [DBG] : fsmap 
Dec 11 09:15:09 compute-0 ceph-mon[74426]: log_channel(cluster) log [DBG] : osdmap e32: 2 total, 2 up, 2 in
Dec 11 09:15:09 compute-0 ceph-mon[74426]: log_channel(cluster) log [DBG] : mgrmap e9: compute-0.wwpcae(active, since 2m)
Dec 11 09:15:09 compute-0 ceph-mon[74426]: log_channel(cluster) log [INF] : overall HEALTH_OK
Dec 11 09:15:09 compute-0 ceph-mon[74426]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/1266408288' entity='mgr.compute-0.wwpcae' 
Dec 11 09:15:09 compute-0 ceph-mon[74426]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Dec 11 09:15:09 compute-0 ceph-mon[74426]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/1266408288' entity='mgr.compute-0.wwpcae' 
Dec 11 09:15:09 compute-0 ceph-mgr[74715]: [progress WARNING root] Starting Global Recovery Event,109 pgs not in active + clean state
Dec 11 09:15:09 compute-0 ceph-mon[74426]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/1266408288' entity='mgr.compute-0.wwpcae' 
Dec 11 09:15:09 compute-0 ceph-mon[74426]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mgr}] v 0)
Dec 11 09:15:09 compute-0 ceph-mon[74426]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/1266408288' entity='mgr.compute-0.wwpcae' 
Dec 11 09:15:09 compute-0 ceph-mon[74426]: mon.compute-0@0(leader).osd e32 do_prune osdmap full prune enabled
Dec 11 09:15:09 compute-0 ceph-mon[74426]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get-or-create", "entity": "mgr.compute-1.unesvp", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} v 0)
Dec 11 09:15:09 compute-0 ceph-mon[74426]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/1266408288' entity='mgr.compute-0.wwpcae' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.compute-1.unesvp", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch
Dec 11 09:15:09 compute-0 ceph-mon[74426]: mon.compute-0@0(leader).osd e33 e33: 2 total, 2 up, 2 in
Dec 11 09:15:09 compute-0 ceph-mon[74426]: log_channel(cluster) log [DBG] : osdmap e33: 2 total, 2 up, 2 in
Dec 11 09:15:09 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 33 pg[2.19( empty local-lis/les=32/33 n=0 ec=26/13 lis/c=26/26 les/c/f=27/27/0 sis=32) [1] r=0 lpr=32 pi=[26,32)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 11 09:15:09 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 33 pg[2.1e( empty local-lis/les=32/33 n=0 ec=26/13 lis/c=26/26 les/c/f=27/27/0 sis=32) [1] r=0 lpr=32 pi=[26,32)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 11 09:15:09 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 33 pg[2.1f( empty local-lis/les=32/33 n=0 ec=26/13 lis/c=26/26 les/c/f=27/27/0 sis=32) [1] r=0 lpr=32 pi=[26,32)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 11 09:15:09 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 33 pg[2.15( empty local-lis/les=32/33 n=0 ec=26/13 lis/c=26/26 les/c/f=27/27/0 sis=32) [1] r=0 lpr=32 pi=[26,32)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 11 09:15:09 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 33 pg[2.1b( empty local-lis/les=32/33 n=0 ec=26/13 lis/c=26/26 les/c/f=27/27/0 sis=32) [1] r=0 lpr=32 pi=[26,32)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 11 09:15:09 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 33 pg[2.9( empty local-lis/les=32/33 n=0 ec=26/13 lis/c=26/26 les/c/f=27/27/0 sis=32) [1] r=0 lpr=32 pi=[26,32)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 11 09:15:09 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 33 pg[2.4( empty local-lis/les=32/33 n=0 ec=26/13 lis/c=26/26 les/c/f=27/27/0 sis=32) [1] r=0 lpr=32 pi=[26,32)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 11 09:15:09 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 33 pg[2.6( empty local-lis/les=32/33 n=0 ec=26/13 lis/c=26/26 les/c/f=27/27/0 sis=32) [1] r=0 lpr=32 pi=[26,32)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 11 09:15:09 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 33 pg[2.d( empty local-lis/les=32/33 n=0 ec=26/13 lis/c=26/26 les/c/f=27/27/0 sis=32) [1] r=0 lpr=32 pi=[26,32)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 11 09:15:09 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 33 pg[2.c( empty local-lis/les=32/33 n=0 ec=26/13 lis/c=26/26 les/c/f=27/27/0 sis=32) [1] r=0 lpr=32 pi=[26,32)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 11 09:15:09 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 33 pg[2.1( empty local-lis/les=32/33 n=0 ec=26/13 lis/c=26/26 les/c/f=27/27/0 sis=32) [1] r=0 lpr=32 pi=[26,32)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 11 09:15:09 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 33 pg[2.10( empty local-lis/les=32/33 n=0 ec=26/13 lis/c=26/26 les/c/f=27/27/0 sis=32) [1] r=0 lpr=32 pi=[26,32)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 11 09:15:09 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 33 pg[2.13( empty local-lis/les=32/33 n=0 ec=26/13 lis/c=26/26 les/c/f=27/27/0 sis=32) [1] r=0 lpr=32 pi=[26,32)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 11 09:15:09 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 33 pg[2.e( empty local-lis/les=32/33 n=0 ec=26/13 lis/c=26/26 les/c/f=27/27/0 sis=32) [1] r=0 lpr=32 pi=[26,32)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 11 09:15:09 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 33 pg[2.a( empty local-lis/les=32/33 n=0 ec=26/13 lis/c=26/26 les/c/f=27/27/0 sis=32) [1] r=0 lpr=32 pi=[26,32)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 11 09:15:09 compute-0 ceph-mon[74426]: 3.1f deep-scrub starts
Dec 11 09:15:09 compute-0 ceph-mon[74426]: 3.1f deep-scrub ok
Dec 11 09:15:09 compute-0 ceph-mon[74426]: from='mgr.14124 192.168.122.100:0/1266408288' entity='mgr.compute-0.wwpcae' cmd=[{"prefix": "mon metadata", "id": "compute-0"}]: dispatch
Dec 11 09:15:09 compute-0 ceph-mon[74426]: from='mgr.14124 192.168.122.100:0/1266408288' entity='mgr.compute-0.wwpcae' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Dec 11 09:15:09 compute-0 ceph-mon[74426]: from='mgr.14124 192.168.122.100:0/1266408288' entity='mgr.compute-0.wwpcae' cmd=[{"prefix": "mon metadata", "id": "compute-2"}]: dispatch
Dec 11 09:15:09 compute-0 ceph-mon[74426]: mon.compute-0 calling monitor election
Dec 11 09:15:09 compute-0 ceph-mon[74426]: mon.compute-2 calling monitor election
Dec 11 09:15:09 compute-0 ceph-mon[74426]: 2.18 scrub starts
Dec 11 09:15:09 compute-0 ceph-mon[74426]: 2.18 scrub ok
Dec 11 09:15:09 compute-0 ceph-mon[74426]: 5.19 scrub starts
Dec 11 09:15:09 compute-0 ceph-mon[74426]: 5.19 scrub ok
Dec 11 09:15:09 compute-0 ceph-mon[74426]: from='mgr.14124 192.168.122.100:0/1266408288' entity='mgr.compute-0.wwpcae' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Dec 11 09:15:09 compute-0 ceph-mon[74426]: 2.17 scrub starts
Dec 11 09:15:09 compute-0 ceph-mon[74426]: 2.17 scrub ok
Dec 11 09:15:09 compute-0 ceph-mon[74426]: 3.1e scrub starts
Dec 11 09:15:09 compute-0 ceph-mon[74426]: 3.1e scrub ok
Dec 11 09:15:09 compute-0 ceph-mon[74426]: pgmap v92: 193 pgs: 15 peering, 31 unknown, 147 active+clean; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Dec 11 09:15:09 compute-0 ceph-mon[74426]: mon.compute-1 calling monitor election
Dec 11 09:15:09 compute-0 ceph-mon[74426]: 2.1a scrub starts
Dec 11 09:15:09 compute-0 ceph-mon[74426]: from='mgr.14124 192.168.122.100:0/1266408288' entity='mgr.compute-0.wwpcae' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Dec 11 09:15:09 compute-0 ceph-mon[74426]: 2.1a scrub ok
Dec 11 09:15:09 compute-0 ceph-mon[74426]: 4.19 scrub starts
Dec 11 09:15:09 compute-0 ceph-mon[74426]: 4.19 scrub ok
Dec 11 09:15:09 compute-0 ceph-mon[74426]: 2.14 scrub starts
Dec 11 09:15:09 compute-0 ceph-mon[74426]: 2.14 scrub ok
Dec 11 09:15:09 compute-0 ceph-mon[74426]: from='mgr.14124 192.168.122.100:0/1266408288' entity='mgr.compute-0.wwpcae' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Dec 11 09:15:09 compute-0 ceph-mon[74426]: 6.1b scrub starts
Dec 11 09:15:09 compute-0 ceph-mon[74426]: 6.1b scrub ok
Dec 11 09:15:09 compute-0 ceph-mon[74426]: pgmap v93: 193 pgs: 78 peering, 31 unknown, 84 active+clean; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Dec 11 09:15:09 compute-0 ceph-mon[74426]: 2.12 deep-scrub starts
Dec 11 09:15:09 compute-0 ceph-mon[74426]: 2.12 deep-scrub ok
Dec 11 09:15:09 compute-0 ceph-mon[74426]: from='mgr.14124 192.168.122.100:0/1266408288' entity='mgr.compute-0.wwpcae' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Dec 11 09:15:09 compute-0 ceph-mon[74426]: from='mgr.14124 192.168.122.100:0/1266408288' entity='mgr.compute-0.wwpcae' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Dec 11 09:15:09 compute-0 ceph-mon[74426]: mon.compute-0 is new leader, mons compute-0,compute-2,compute-1 in quorum (ranks 0,1,2)
Dec 11 09:15:09 compute-0 ceph-mon[74426]: monmap epoch 3
Dec 11 09:15:09 compute-0 ceph-mon[74426]: fsid 31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060
Dec 11 09:15:09 compute-0 ceph-mon[74426]: last_changed 2025-12-11T09:15:04.733539+0000
Dec 11 09:15:09 compute-0 ceph-mon[74426]: created 2025-12-11T09:12:23.814502+0000
Dec 11 09:15:09 compute-0 ceph-mon[74426]: min_mon_release 19 (squid)
Dec 11 09:15:09 compute-0 ceph-mon[74426]: election_strategy: 1
Dec 11 09:15:09 compute-0 ceph-mon[74426]: 0: [v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0] mon.compute-0
Dec 11 09:15:09 compute-0 ceph-mon[74426]: 1: [v2:192.168.122.102:3300/0,v1:192.168.122.102:6789/0] mon.compute-2
Dec 11 09:15:09 compute-0 ceph-mon[74426]: 2: [v2:192.168.122.101:3300/0,v1:192.168.122.101:6789/0] mon.compute-1
Dec 11 09:15:09 compute-0 ceph-mon[74426]: fsmap 
Dec 11 09:15:09 compute-0 ceph-mon[74426]: osdmap e32: 2 total, 2 up, 2 in
Dec 11 09:15:09 compute-0 ceph-mon[74426]: mgrmap e9: compute-0.wwpcae(active, since 2m)
Dec 11 09:15:09 compute-0 ceph-mon[74426]: overall HEALTH_OK
Dec 11 09:15:09 compute-0 ceph-mon[74426]: from='mgr.14124 192.168.122.100:0/1266408288' entity='mgr.compute-0.wwpcae' 
Dec 11 09:15:09 compute-0 ceph-mon[74426]: from='mgr.14124 192.168.122.100:0/1266408288' entity='mgr.compute-0.wwpcae' 
Dec 11 09:15:09 compute-0 ceph-mon[74426]: from='mgr.14124 192.168.122.100:0/1266408288' entity='mgr.compute-0.wwpcae' 
Dec 11 09:15:09 compute-0 ceph-mon[74426]: from='mgr.14124 192.168.122.100:0/1266408288' entity='mgr.compute-0.wwpcae' 
Dec 11 09:15:09 compute-0 ceph-mon[74426]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/1266408288' entity='mgr.compute-0.wwpcae' cmd='[{"prefix": "auth get-or-create", "entity": "mgr.compute-1.unesvp", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]': finished
Dec 11 09:15:09 compute-0 ceph-mon[74426]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr services"} v 0)
Dec 11 09:15:09 compute-0 ceph-mon[74426]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/1266408288' entity='mgr.compute-0.wwpcae' cmd=[{"prefix": "mgr services"}]: dispatch
Dec 11 09:15:09 compute-0 ceph-mon[74426]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 11 09:15:09 compute-0 ceph-mon[74426]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/1266408288' entity='mgr.compute-0.wwpcae' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 11 09:15:09 compute-0 ceph-mgr[74715]: [cephadm INFO cephadm.serve] Deploying daemon mgr.compute-1.unesvp on compute-1
Dec 11 09:15:09 compute-0 ceph-mgr[74715]: log_channel(cephadm) log [INF] : Deploying daemon mgr.compute-1.unesvp on compute-1
Dec 11 09:15:10 compute-0 ceph-mon[74426]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config assimilate-conf"} v 0)
Dec 11 09:15:10 compute-0 ceph-mon[74426]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1693698922' entity='client.admin' cmd=[{"prefix": "config assimilate-conf"}]: dispatch
Dec 11 09:15:10 compute-0 ceph-mon[74426]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1693698922' entity='client.admin' cmd='[{"prefix": "config assimilate-conf"}]': finished
Dec 11 09:15:10 compute-0 nice_sinoussi[86423]: 
Dec 11 09:15:10 compute-0 nice_sinoussi[86423]: [global]
Dec 11 09:15:10 compute-0 nice_sinoussi[86423]:         fsid = 31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060
Dec 11 09:15:10 compute-0 nice_sinoussi[86423]:         mon_host = 192.168.122.100
Dec 11 09:15:10 compute-0 systemd[1]: libpod-1e874aaabb7c3cd0e5ab56e73f114f4eba2c517ee9437fcc9c236e71352744a8.scope: Deactivated successfully.
Dec 11 09:15:10 compute-0 podman[86407]: 2025-12-11 09:15:10.187121341 +0000 UTC m=+11.006953220 container died 1e874aaabb7c3cd0e5ab56e73f114f4eba2c517ee9437fcc9c236e71352744a8 (image=quay.io/ceph/ceph:v19, name=nice_sinoussi, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.build-date=20250325)
Dec 11 09:15:10 compute-0 ceph-mgr[74715]: log_channel(cluster) log [DBG] : pgmap v95: 193 pgs: 78 peering, 31 unknown, 84 active+clean; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Dec 11 09:15:10 compute-0 systemd[1]: var-lib-containers-storage-overlay-b16f0d8a24d61c818909c73952ca2fe56c8cc615d3c64101911db6619c957b81-merged.mount: Deactivated successfully.
Dec 11 09:15:10 compute-0 podman[86407]: 2025-12-11 09:15:10.243862976 +0000 UTC m=+11.063694855 container remove 1e874aaabb7c3cd0e5ab56e73f114f4eba2c517ee9437fcc9c236e71352744a8 (image=quay.io/ceph/ceph:v19, name=nice_sinoussi, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec 11 09:15:10 compute-0 systemd[1]: libpod-conmon-1e874aaabb7c3cd0e5ab56e73f114f4eba2c517ee9437fcc9c236e71352744a8.scope: Deactivated successfully.
Dec 11 09:15:10 compute-0 sudo[86404]: pam_unix(sudo:session): session closed for user root
Dec 11 09:15:10 compute-0 ceph-osd[82859]: log_channel(cluster) log [DBG] : 3.1b scrub starts
Dec 11 09:15:10 compute-0 ceph-osd[82859]: log_channel(cluster) log [DBG] : 3.1b scrub ok
Dec 11 09:15:10 compute-0 sudo[86482]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bqndeinkummsvbzeccpsbbtpfztmekcm ; /usr/bin/python3'
Dec 11 09:15:10 compute-0 sudo[86482]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 11 09:15:10 compute-0 python3[86484]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /tmp/ceph_rgw.yml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   config-key set ssl_option no_sslv2:sslv3:no_tlsv1:no_tlsv1_1 _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 11 09:15:10 compute-0 podman[86485]: 2025-12-11 09:15:10.673234723 +0000 UTC m=+0.057134299 container create 1b1234ae4fbaee5522371ee2e7bbc7ac7dcc769d5e474f2b762ed33c97a2562e (image=quay.io/ceph/ceph:v19, name=youthful_agnesi, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid)
Dec 11 09:15:10 compute-0 systemd[1]: Started libpod-conmon-1b1234ae4fbaee5522371ee2e7bbc7ac7dcc769d5e474f2b762ed33c97a2562e.scope.
Dec 11 09:15:10 compute-0 systemd[1]: Started libcrun container.
Dec 11 09:15:10 compute-0 ceph-mgr[74715]: mgr.server handle_open ignoring open from mon.compute-1 192.168.122.101:0/4028123285; not ready for session (expect reconnect)
Dec 11 09:15:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1142a350663b1f569e0b06db3704b0a5b0b6e1eae8de6cca486ee784d3b74fc2/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec 11 09:15:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1142a350663b1f569e0b06db3704b0a5b0b6e1eae8de6cca486ee784d3b74fc2/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 11 09:15:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1142a350663b1f569e0b06db3704b0a5b0b6e1eae8de6cca486ee784d3b74fc2/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Dec 11 09:15:10 compute-0 ceph-mon[74426]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon metadata", "id": "compute-1"} v 0)
Dec 11 09:15:10 compute-0 ceph-mon[74426]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/1266408288' entity='mgr.compute-0.wwpcae' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Dec 11 09:15:10 compute-0 podman[86485]: 2025-12-11 09:15:10.652346306 +0000 UTC m=+0.036245892 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Dec 11 09:15:10 compute-0 podman[86485]: 2025-12-11 09:15:10.750073972 +0000 UTC m=+0.133973568 container init 1b1234ae4fbaee5522371ee2e7bbc7ac7dcc769d5e474f2b762ed33c97a2562e (image=quay.io/ceph/ceph:v19, name=youthful_agnesi, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 11 09:15:10 compute-0 podman[86485]: 2025-12-11 09:15:10.758281975 +0000 UTC m=+0.142181541 container start 1b1234ae4fbaee5522371ee2e7bbc7ac7dcc769d5e474f2b762ed33c97a2562e (image=quay.io/ceph/ceph:v19, name=youthful_agnesi, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.vendor=CentOS)
Dec 11 09:15:10 compute-0 podman[86485]: 2025-12-11 09:15:10.76253943 +0000 UTC m=+0.146438996 container attach 1b1234ae4fbaee5522371ee2e7bbc7ac7dcc769d5e474f2b762ed33c97a2562e (image=quay.io/ceph/ceph:v19, name=youthful_agnesi, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=squid, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 11 09:15:10 compute-0 ceph-mon[74426]: 6.18 deep-scrub starts
Dec 11 09:15:10 compute-0 ceph-mon[74426]: 6.18 deep-scrub ok
Dec 11 09:15:10 compute-0 ceph-mon[74426]: 2.11 scrub starts
Dec 11 09:15:10 compute-0 ceph-mon[74426]: 2.11 scrub ok
Dec 11 09:15:10 compute-0 ceph-mon[74426]: from='mgr.14124 192.168.122.100:0/1266408288' entity='mgr.compute-0.wwpcae' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.compute-1.unesvp", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch
Dec 11 09:15:10 compute-0 ceph-mon[74426]: osdmap e33: 2 total, 2 up, 2 in
Dec 11 09:15:10 compute-0 ceph-mon[74426]: from='mgr.14124 192.168.122.100:0/1266408288' entity='mgr.compute-0.wwpcae' cmd='[{"prefix": "auth get-or-create", "entity": "mgr.compute-1.unesvp", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]': finished
Dec 11 09:15:10 compute-0 ceph-mon[74426]: from='mgr.14124 192.168.122.100:0/1266408288' entity='mgr.compute-0.wwpcae' cmd=[{"prefix": "mgr services"}]: dispatch
Dec 11 09:15:10 compute-0 ceph-mon[74426]: from='mgr.14124 192.168.122.100:0/1266408288' entity='mgr.compute-0.wwpcae' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 11 09:15:10 compute-0 ceph-mon[74426]: Deploying daemon mgr.compute-1.unesvp on compute-1
Dec 11 09:15:10 compute-0 ceph-mon[74426]: from='client.? 192.168.122.100:0/1693698922' entity='client.admin' cmd=[{"prefix": "config assimilate-conf"}]: dispatch
Dec 11 09:15:10 compute-0 ceph-mon[74426]: from='client.? 192.168.122.100:0/1693698922' entity='client.admin' cmd='[{"prefix": "config assimilate-conf"}]': finished
Dec 11 09:15:10 compute-0 ceph-mon[74426]: pgmap v95: 193 pgs: 78 peering, 31 unknown, 84 active+clean; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Dec 11 09:15:10 compute-0 ceph-mon[74426]: from='mgr.14124 192.168.122.100:0/1266408288' entity='mgr.compute-0.wwpcae' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Dec 11 09:15:11 compute-0 ceph-osd[82859]: log_channel(cluster) log [DBG] : 4.1c scrub starts
Dec 11 09:15:11 compute-0 ceph-osd[82859]: log_channel(cluster) log [DBG] : 4.1c scrub ok
Dec 11 09:15:11 compute-0 ceph-mon[74426]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=ssl_option}] v 0)
Dec 11 09:15:11 compute-0 ceph-mon[74426]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1088465051' entity='client.admin' 
Dec 11 09:15:11 compute-0 youthful_agnesi[86500]: set ssl_option
Dec 11 09:15:11 compute-0 systemd[1]: libpod-1b1234ae4fbaee5522371ee2e7bbc7ac7dcc769d5e474f2b762ed33c97a2562e.scope: Deactivated successfully.
Dec 11 09:15:11 compute-0 podman[86485]: 2025-12-11 09:15:11.280680668 +0000 UTC m=+0.664580234 container died 1b1234ae4fbaee5522371ee2e7bbc7ac7dcc769d5e474f2b762ed33c97a2562e (image=quay.io/ceph/ceph:v19, name=youthful_agnesi, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.build-date=20250325, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Dec 11 09:15:11 compute-0 systemd[1]: var-lib-containers-storage-overlay-1142a350663b1f569e0b06db3704b0a5b0b6e1eae8de6cca486ee784d3b74fc2-merged.mount: Deactivated successfully.
Dec 11 09:15:11 compute-0 podman[86485]: 2025-12-11 09:15:11.327174105 +0000 UTC m=+0.711073671 container remove 1b1234ae4fbaee5522371ee2e7bbc7ac7dcc769d5e474f2b762ed33c97a2562e (image=quay.io/ceph/ceph:v19, name=youthful_agnesi, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 11 09:15:11 compute-0 systemd[1]: libpod-conmon-1b1234ae4fbaee5522371ee2e7bbc7ac7dcc769d5e474f2b762ed33c97a2562e.scope: Deactivated successfully.
Dec 11 09:15:11 compute-0 sudo[86482]: pam_unix(sudo:session): session closed for user root
Dec 11 09:15:11 compute-0 ceph-mon[74426]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Dec 11 09:15:11 compute-0 sudo[86559]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hzmxworcfpppvegraorzhltxwkpbryjx ; /usr/bin/python3'
Dec 11 09:15:11 compute-0 sudo[86559]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 11 09:15:11 compute-0 ceph-mon[74426]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/1266408288' entity='mgr.compute-0.wwpcae' 
Dec 11 09:15:11 compute-0 ceph-mon[74426]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Dec 11 09:15:11 compute-0 ceph-mon[74426]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/1266408288' entity='mgr.compute-0.wwpcae' 
Dec 11 09:15:11 compute-0 ceph-mon[74426]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mgr}] v 0)
Dec 11 09:15:11 compute-0 ceph-mon[74426]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/1266408288' entity='mgr.compute-0.wwpcae' 
Dec 11 09:15:11 compute-0 ceph-mgr[74715]: [progress INFO root] complete: finished ev 0e5d9a64-c6c6-4b16-98b6-9d283c58f546 (Updating mgr deployment (+2 -> 3))
Dec 11 09:15:11 compute-0 ceph-mgr[74715]: [progress INFO root] Completed event 0e5d9a64-c6c6-4b16-98b6-9d283c58f546 (Updating mgr deployment (+2 -> 3)) in 8 seconds
Dec 11 09:15:11 compute-0 ceph-mon[74426]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mgr}] v 0)
Dec 11 09:15:11 compute-0 ceph-mon[74426]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/1266408288' entity='mgr.compute-0.wwpcae' 
Dec 11 09:15:11 compute-0 ceph-mgr[74715]: [progress INFO root] update: starting ev 22a7f850-437c-4891-8ad5-6ed33f1ca1ca (Updating crash deployment (+1 -> 3))
Dec 11 09:15:11 compute-0 ceph-mon[74426]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.crash.compute-2", "caps": ["mon", "profile crash", "mgr", "profile crash"]} v 0)
Dec 11 09:15:11 compute-0 ceph-mon[74426]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/1266408288' entity='mgr.compute-0.wwpcae' cmd=[{"prefix": "auth get-or-create", "entity": "client.crash.compute-2", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]: dispatch
Dec 11 09:15:11 compute-0 ceph-mon[74426]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/1266408288' entity='mgr.compute-0.wwpcae' cmd='[{"prefix": "auth get-or-create", "entity": "client.crash.compute-2", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]': finished
Dec 11 09:15:11 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-mgr-compute-0-wwpcae[74711]: 2025-12-11T09:15:11.737+0000 7f8d349e3640 -1 mgr.server handle_report got status from non-daemon mon.compute-1
Dec 11 09:15:11 compute-0 ceph-mgr[74715]: mgr.server handle_report got status from non-daemon mon.compute-1
Dec 11 09:15:11 compute-0 python3[86561]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /tmp/ceph_rgw.yml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch apply --in-file /home/ceph_spec.yaml _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 11 09:15:11 compute-0 ceph-mon[74426]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 11 09:15:11 compute-0 ceph-mon[74426]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/1266408288' entity='mgr.compute-0.wwpcae' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 11 09:15:11 compute-0 ceph-mgr[74715]: [cephadm INFO cephadm.serve] Deploying daemon crash.compute-2 on compute-2
Dec 11 09:15:11 compute-0 ceph-mgr[74715]: log_channel(cephadm) log [INF] : Deploying daemon crash.compute-2 on compute-2
Dec 11 09:15:11 compute-0 podman[86562]: 2025-12-11 09:15:11.811622124 +0000 UTC m=+0.049678330 container create c72cb6e93ba3790dfdc26e722d9759bfe9725f3ec164b03edfb3adddbcb99673 (image=quay.io/ceph/ceph:v19, name=sweet_hypatia, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 11 09:15:11 compute-0 systemd[1]: Started libpod-conmon-c72cb6e93ba3790dfdc26e722d9759bfe9725f3ec164b03edfb3adddbcb99673.scope.
Dec 11 09:15:11 compute-0 podman[86562]: 2025-12-11 09:15:11.78864643 +0000 UTC m=+0.026702666 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Dec 11 09:15:12 compute-0 ceph-mgr[74715]: log_channel(cluster) log [DBG] : pgmap v96: 193 pgs: 78 peering, 31 unknown, 84 active+clean; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Dec 11 09:15:12 compute-0 ceph-osd[82859]: log_channel(cluster) log [DBG] : 5.1d scrub starts
Dec 11 09:15:12 compute-0 systemd[1]: Started libcrun container.
Dec 11 09:15:12 compute-0 ceph-mon[74426]: 3.1b scrub starts
Dec 11 09:15:12 compute-0 ceph-mon[74426]: 3.1b scrub ok
Dec 11 09:15:12 compute-0 ceph-mon[74426]: 2.f scrub starts
Dec 11 09:15:12 compute-0 ceph-mon[74426]: 2.f scrub ok
Dec 11 09:15:12 compute-0 ceph-mon[74426]: from='client.? 192.168.122.100:0/1088465051' entity='client.admin' 
Dec 11 09:15:12 compute-0 ceph-mon[74426]: from='mgr.14124 192.168.122.100:0/1266408288' entity='mgr.compute-0.wwpcae' 
Dec 11 09:15:12 compute-0 ceph-mon[74426]: from='mgr.14124 192.168.122.100:0/1266408288' entity='mgr.compute-0.wwpcae' 
Dec 11 09:15:12 compute-0 ceph-mon[74426]: from='mgr.14124 192.168.122.100:0/1266408288' entity='mgr.compute-0.wwpcae' 
Dec 11 09:15:12 compute-0 ceph-mon[74426]: from='mgr.14124 192.168.122.100:0/1266408288' entity='mgr.compute-0.wwpcae' 
Dec 11 09:15:12 compute-0 ceph-mon[74426]: from='mgr.14124 192.168.122.100:0/1266408288' entity='mgr.compute-0.wwpcae' cmd=[{"prefix": "auth get-or-create", "entity": "client.crash.compute-2", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]: dispatch
Dec 11 09:15:12 compute-0 ceph-mon[74426]: from='mgr.14124 192.168.122.100:0/1266408288' entity='mgr.compute-0.wwpcae' cmd='[{"prefix": "auth get-or-create", "entity": "client.crash.compute-2", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]': finished
Dec 11 09:15:12 compute-0 ceph-mon[74426]: from='mgr.14124 192.168.122.100:0/1266408288' entity='mgr.compute-0.wwpcae' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 11 09:15:12 compute-0 ceph-osd[82859]: log_channel(cluster) log [DBG] : 5.1d scrub ok
Dec 11 09:15:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2118c2912c41d13644ee6c371eec0c0b81778cc212d8934d465f8141bf1afc7b/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec 11 09:15:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2118c2912c41d13644ee6c371eec0c0b81778cc212d8934d465f8141bf1afc7b/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 11 09:15:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2118c2912c41d13644ee6c371eec0c0b81778cc212d8934d465f8141bf1afc7b/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Dec 11 09:15:12 compute-0 podman[86562]: 2025-12-11 09:15:12.320808985 +0000 UTC m=+0.558865211 container init c72cb6e93ba3790dfdc26e722d9759bfe9725f3ec164b03edfb3adddbcb99673 (image=quay.io/ceph/ceph:v19, name=sweet_hypatia, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid)
Dec 11 09:15:12 compute-0 podman[86562]: 2025-12-11 09:15:12.328989287 +0000 UTC m=+0.567045493 container start c72cb6e93ba3790dfdc26e722d9759bfe9725f3ec164b03edfb3adddbcb99673 (image=quay.io/ceph/ceph:v19, name=sweet_hypatia, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 11 09:15:12 compute-0 podman[86562]: 2025-12-11 09:15:12.409435491 +0000 UTC m=+0.647491697 container attach c72cb6e93ba3790dfdc26e722d9759bfe9725f3ec164b03edfb3adddbcb99673 (image=quay.io/ceph/ceph:v19, name=sweet_hypatia, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Dec 11 09:15:12 compute-0 ceph-mgr[74715]: log_channel(audit) log [DBG] : from='client.14244 -' entity='client.admin' cmd=[{"prefix": "orch apply", "target": ["mon-mgr", ""]}]: dispatch
Dec 11 09:15:12 compute-0 ceph-mgr[74715]: [cephadm INFO root] Saving service rgw.rgw spec with placement compute-0;compute-1;compute-2
Dec 11 09:15:12 compute-0 ceph-mgr[74715]: log_channel(cephadm) log [INF] : Saving service rgw.rgw spec with placement compute-0;compute-1;compute-2
Dec 11 09:15:12 compute-0 ceph-mon[74426]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.rgw.rgw}] v 0)
Dec 11 09:15:12 compute-0 ceph-mon[74426]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/1266408288' entity='mgr.compute-0.wwpcae' 
Dec 11 09:15:12 compute-0 ceph-mgr[74715]: [cephadm INFO root] Saving service ingress.rgw.default spec with placement count:2
Dec 11 09:15:12 compute-0 ceph-mgr[74715]: log_channel(cephadm) log [INF] : Saving service ingress.rgw.default spec with placement count:2
Dec 11 09:15:12 compute-0 ceph-mon[74426]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.ingress.rgw.default}] v 0)
Dec 11 09:15:12 compute-0 ceph-mon[74426]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/1266408288' entity='mgr.compute-0.wwpcae' 
Dec 11 09:15:12 compute-0 sweet_hypatia[86577]: Scheduled rgw.rgw update...
Dec 11 09:15:12 compute-0 sweet_hypatia[86577]: Scheduled ingress.rgw.default update...
Dec 11 09:15:12 compute-0 systemd[1]: libpod-c72cb6e93ba3790dfdc26e722d9759bfe9725f3ec164b03edfb3adddbcb99673.scope: Deactivated successfully.
Dec 11 09:15:12 compute-0 podman[86562]: 2025-12-11 09:15:12.808815177 +0000 UTC m=+1.046871403 container died c72cb6e93ba3790dfdc26e722d9759bfe9725f3ec164b03edfb3adddbcb99673 (image=quay.io/ceph/ceph:v19, name=sweet_hypatia, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=squid, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Dec 11 09:15:12 compute-0 systemd[1]: var-lib-containers-storage-overlay-2118c2912c41d13644ee6c371eec0c0b81778cc212d8934d465f8141bf1afc7b-merged.mount: Deactivated successfully.
Dec 11 09:15:12 compute-0 podman[86562]: 2025-12-11 09:15:12.894355225 +0000 UTC m=+1.132411431 container remove c72cb6e93ba3790dfdc26e722d9759bfe9725f3ec164b03edfb3adddbcb99673 (image=quay.io/ceph/ceph:v19, name=sweet_hypatia, CEPH_REF=squid, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 11 09:15:12 compute-0 systemd[1]: libpod-conmon-c72cb6e93ba3790dfdc26e722d9759bfe9725f3ec164b03edfb3adddbcb99673.scope: Deactivated successfully.
Dec 11 09:15:12 compute-0 sudo[86559]: pam_unix(sudo:session): session closed for user root
Dec 11 09:15:13 compute-0 ceph-osd[82859]: log_channel(cluster) log [DBG] : 6.1f scrub starts
Dec 11 09:15:13 compute-0 ceph-mon[74426]: 4.1c scrub starts
Dec 11 09:15:13 compute-0 ceph-mon[74426]: 4.1c scrub ok
Dec 11 09:15:13 compute-0 ceph-mon[74426]: Deploying daemon crash.compute-2 on compute-2
Dec 11 09:15:13 compute-0 ceph-mon[74426]: 2.16 scrub starts
Dec 11 09:15:13 compute-0 ceph-mon[74426]: 2.16 scrub ok
Dec 11 09:15:13 compute-0 ceph-mon[74426]: pgmap v96: 193 pgs: 78 peering, 31 unknown, 84 active+clean; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Dec 11 09:15:13 compute-0 ceph-mon[74426]: 5.1d scrub starts
Dec 11 09:15:13 compute-0 ceph-mon[74426]: 5.1d scrub ok
Dec 11 09:15:13 compute-0 ceph-mon[74426]: 2.3 scrub starts
Dec 11 09:15:13 compute-0 ceph-mon[74426]: 2.3 scrub ok
Dec 11 09:15:13 compute-0 ceph-mon[74426]: from='mgr.14124 192.168.122.100:0/1266408288' entity='mgr.compute-0.wwpcae' 
Dec 11 09:15:13 compute-0 ceph-mon[74426]: from='mgr.14124 192.168.122.100:0/1266408288' entity='mgr.compute-0.wwpcae' 
Dec 11 09:15:13 compute-0 ceph-osd[82859]: log_channel(cluster) log [DBG] : 6.1f scrub ok
Dec 11 09:15:13 compute-0 ceph-mon[74426]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0)
Dec 11 09:15:13 compute-0 ceph-mon[74426]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/1266408288' entity='mgr.compute-0.wwpcae' 
Dec 11 09:15:13 compute-0 python3[86688]: ansible-ansible.legacy.stat Invoked with path=/tmp/ceph_dashboard.yml follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec 11 09:15:13 compute-0 ceph-mon[74426]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Dec 11 09:15:13 compute-0 ceph-mon[74426]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/1266408288' entity='mgr.compute-0.wwpcae' 
Dec 11 09:15:13 compute-0 ceph-mon[74426]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.crash}] v 0)
Dec 11 09:15:13 compute-0 ceph-mon[74426]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/1266408288' entity='mgr.compute-0.wwpcae' 
Dec 11 09:15:13 compute-0 ceph-mgr[74715]: [progress INFO root] complete: finished ev 22a7f850-437c-4891-8ad5-6ed33f1ca1ca (Updating crash deployment (+1 -> 3))
Dec 11 09:15:13 compute-0 ceph-mgr[74715]: [progress INFO root] Completed event 22a7f850-437c-4891-8ad5-6ed33f1ca1ca (Updating crash deployment (+1 -> 3)) in 2 seconds
Dec 11 09:15:13 compute-0 ceph-mon[74426]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.crash}] v 0)
Dec 11 09:15:13 compute-0 ceph-mon[74426]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/1266408288' entity='mgr.compute-0.wwpcae' 
Dec 11 09:15:13 compute-0 ceph-mon[74426]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Dec 11 09:15:13 compute-0 ceph-mon[74426]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/1266408288' entity='mgr.compute-0.wwpcae' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec 11 09:15:13 compute-0 ceph-mon[74426]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Dec 11 09:15:13 compute-0 ceph-mon[74426]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/1266408288' entity='mgr.compute-0.wwpcae' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec 11 09:15:13 compute-0 ceph-mon[74426]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 11 09:15:13 compute-0 ceph-mon[74426]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/1266408288' entity='mgr.compute-0.wwpcae' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 11 09:15:13 compute-0 ceph-mon[74426]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Dec 11 09:15:13 compute-0 ceph-mon[74426]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/1266408288' entity='mgr.compute-0.wwpcae' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec 11 09:15:13 compute-0 ceph-mon[74426]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 11 09:15:13 compute-0 ceph-mon[74426]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/1266408288' entity='mgr.compute-0.wwpcae' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 11 09:15:13 compute-0 sudo[86700]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 11 09:15:13 compute-0 sudo[86700]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 11 09:15:13 compute-0 sudo[86700]: pam_unix(sudo:session): session closed for user root
Dec 11 09:15:13 compute-0 sudo[86749]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Dec 11 09:15:13 compute-0 sudo[86749]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 11 09:15:13 compute-0 python3[86809]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1765444513.0816746-37154-260529381233471/source dest=/tmp/ceph_dashboard.yml mode=0644 force=True follow=False _original_basename=ceph_monitoring_stack.yml.j2 checksum=2701faaa92cae31b5bbad92984c27e2af7a44b84 backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 11 09:15:13 compute-0 ceph-mon[74426]: mon.compute-0@0(leader).osd e33 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 11 09:15:14 compute-0 podman[86874]: 2025-12-11 09:15:14.04657998 +0000 UTC m=+0.048699890 container create 62a95fe8171a16b6eacaa4bbdcc33af02dc689822c1f80c4c893c805ba79c854 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=distracted_mccarthy, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1)
Dec 11 09:15:14 compute-0 systemd[1]: Started libpod-conmon-62a95fe8171a16b6eacaa4bbdcc33af02dc689822c1f80c4c893c805ba79c854.scope.
Dec 11 09:15:14 compute-0 systemd[1]: Started libcrun container.
Dec 11 09:15:14 compute-0 podman[86874]: 2025-12-11 09:15:14.106294281 +0000 UTC m=+0.108414211 container init 62a95fe8171a16b6eacaa4bbdcc33af02dc689822c1f80c4c893c805ba79c854 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=distracted_mccarthy, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Dec 11 09:15:14 compute-0 podman[86874]: 2025-12-11 09:15:14.11221511 +0000 UTC m=+0.114335020 container start 62a95fe8171a16b6eacaa4bbdcc33af02dc689822c1f80c4c893c805ba79c854 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=distracted_mccarthy, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Dec 11 09:15:14 compute-0 podman[86874]: 2025-12-11 09:15:14.11629112 +0000 UTC m=+0.118411130 container attach 62a95fe8171a16b6eacaa4bbdcc33af02dc689822c1f80c4c893c805ba79c854 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=distracted_mccarthy, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 11 09:15:14 compute-0 distracted_mccarthy[86890]: 167 167
Dec 11 09:15:14 compute-0 systemd[1]: libpod-62a95fe8171a16b6eacaa4bbdcc33af02dc689822c1f80c4c893c805ba79c854.scope: Deactivated successfully.
Dec 11 09:15:14 compute-0 podman[86874]: 2025-12-11 09:15:14.120432263 +0000 UTC m=+0.122552173 container died 62a95fe8171a16b6eacaa4bbdcc33af02dc689822c1f80c4c893c805ba79c854 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=distracted_mccarthy, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325)
Dec 11 09:15:14 compute-0 podman[86874]: 2025-12-11 09:15:14.027800269 +0000 UTC m=+0.029920179 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec 11 09:15:14 compute-0 systemd[1]: var-lib-containers-storage-overlay-7000bd6220f132ac7e0a53d11403638dd4981e1ae089f88ca723d9f421114081-merged.mount: Deactivated successfully.
Dec 11 09:15:14 compute-0 podman[86874]: 2025-12-11 09:15:14.161508987 +0000 UTC m=+0.163628897 container remove 62a95fe8171a16b6eacaa4bbdcc33af02dc689822c1f80c4c893c805ba79c854 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=distracted_mccarthy, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Dec 11 09:15:14 compute-0 systemd[1]: libpod-conmon-62a95fe8171a16b6eacaa4bbdcc33af02dc689822c1f80c4c893c805ba79c854.scope: Deactivated successfully.
Dec 11 09:15:14 compute-0 sudo[86931]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hmlmvwqxlkuzdibbwbbvrgmjhjnyuqhp ; /usr/bin/python3'
Dec 11 09:15:14 compute-0 sudo[86931]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 11 09:15:14 compute-0 ceph-mgr[74715]: log_channel(cluster) log [DBG] : pgmap v97: 193 pgs: 15 peering, 178 active+clean; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Dec 11 09:15:14 compute-0 ceph-osd[82859]: log_channel(cluster) log [DBG] : 4.1d scrub starts
Dec 11 09:15:14 compute-0 ceph-osd[82859]: log_channel(cluster) log [DBG] : 4.1d scrub ok
Dec 11 09:15:14 compute-0 podman[86939]: 2025-12-11 09:15:14.35164663 +0000 UTC m=+0.047801500 container create 544ebadd71e9d9d36c013dccaaae4c037161114ba2ac869a0ffed4c1a85a942d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=heuristic_turing, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, OSD_FLAVOR=default, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 11 09:15:14 compute-0 python3[86933]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /tmp/ceph_dashboard.yml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch apply --in-file /home/ceph_spec.yaml _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 11 09:15:14 compute-0 systemd[1]: Started libpod-conmon-544ebadd71e9d9d36c013dccaaae4c037161114ba2ac869a0ffed4c1a85a942d.scope.
Dec 11 09:15:14 compute-0 systemd[1]: Started libcrun container.
Dec 11 09:15:14 compute-0 podman[86953]: 2025-12-11 09:15:14.428590362 +0000 UTC m=+0.049292288 container create 749c2d9736b5bdbf6a6cd0fb2dd6340ce99961b44a1bd0f0b26eb30da2bfbe8c (image=quay.io/ceph/ceph:v19, name=nice_bose, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Dec 11 09:15:14 compute-0 podman[86939]: 2025-12-11 09:15:14.33415081 +0000 UTC m=+0.030305700 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec 11 09:15:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/be15d4b2dd92ea7a1c8334cca3d910ac0af57166dafe09529ad8367e9bb598f9/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 11 09:15:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/be15d4b2dd92ea7a1c8334cca3d910ac0af57166dafe09529ad8367e9bb598f9/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 11 09:15:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/be15d4b2dd92ea7a1c8334cca3d910ac0af57166dafe09529ad8367e9bb598f9/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 11 09:15:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/be15d4b2dd92ea7a1c8334cca3d910ac0af57166dafe09529ad8367e9bb598f9/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 11 09:15:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/be15d4b2dd92ea7a1c8334cca3d910ac0af57166dafe09529ad8367e9bb598f9/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec 11 09:15:14 compute-0 podman[86939]: 2025-12-11 09:15:14.480557714 +0000 UTC m=+0.176712604 container init 544ebadd71e9d9d36c013dccaaae4c037161114ba2ac869a0ffed4c1a85a942d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=heuristic_turing, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, ceph=True, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec 11 09:15:14 compute-0 systemd[1]: Started libpod-conmon-749c2d9736b5bdbf6a6cd0fb2dd6340ce99961b44a1bd0f0b26eb30da2bfbe8c.scope.
Dec 11 09:15:14 compute-0 podman[86939]: 2025-12-11 09:15:14.493536229 +0000 UTC m=+0.189691099 container start 544ebadd71e9d9d36c013dccaaae4c037161114ba2ac869a0ffed4c1a85a942d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=heuristic_turing, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 11 09:15:14 compute-0 podman[86939]: 2025-12-11 09:15:14.497707793 +0000 UTC m=+0.193862663 container attach 544ebadd71e9d9d36c013dccaaae4c037161114ba2ac869a0ffed4c1a85a942d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=heuristic_turing, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 11 09:15:14 compute-0 podman[86953]: 2025-12-11 09:15:14.406488355 +0000 UTC m=+0.027190281 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Dec 11 09:15:14 compute-0 systemd[1]: Started libcrun container.
Dec 11 09:15:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e272bb3a1e257d15aa6208daaa1b7f95205382d8012c61bcfd9e1d4db34e9101/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Dec 11 09:15:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e272bb3a1e257d15aa6208daaa1b7f95205382d8012c61bcfd9e1d4db34e9101/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec 11 09:15:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e272bb3a1e257d15aa6208daaa1b7f95205382d8012c61bcfd9e1d4db34e9101/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 11 09:15:14 compute-0 ceph-mon[74426]: from='client.14244 -' entity='client.admin' cmd=[{"prefix": "orch apply", "target": ["mon-mgr", ""]}]: dispatch
Dec 11 09:15:14 compute-0 ceph-mon[74426]: Saving service rgw.rgw spec with placement compute-0;compute-1;compute-2
Dec 11 09:15:14 compute-0 ceph-mon[74426]: Saving service ingress.rgw.default spec with placement count:2
Dec 11 09:15:14 compute-0 ceph-mon[74426]: 6.1f scrub starts
Dec 11 09:15:14 compute-0 ceph-mon[74426]: 6.1f scrub ok
Dec 11 09:15:14 compute-0 ceph-mon[74426]: from='mgr.14124 192.168.122.100:0/1266408288' entity='mgr.compute-0.wwpcae' 
Dec 11 09:15:14 compute-0 ceph-mon[74426]: from='mgr.14124 192.168.122.100:0/1266408288' entity='mgr.compute-0.wwpcae' 
Dec 11 09:15:14 compute-0 ceph-mon[74426]: from='mgr.14124 192.168.122.100:0/1266408288' entity='mgr.compute-0.wwpcae' 
Dec 11 09:15:14 compute-0 ceph-mon[74426]: from='mgr.14124 192.168.122.100:0/1266408288' entity='mgr.compute-0.wwpcae' 
Dec 11 09:15:14 compute-0 ceph-mon[74426]: from='mgr.14124 192.168.122.100:0/1266408288' entity='mgr.compute-0.wwpcae' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec 11 09:15:14 compute-0 ceph-mon[74426]: from='mgr.14124 192.168.122.100:0/1266408288' entity='mgr.compute-0.wwpcae' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec 11 09:15:14 compute-0 ceph-mon[74426]: from='mgr.14124 192.168.122.100:0/1266408288' entity='mgr.compute-0.wwpcae' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 11 09:15:14 compute-0 ceph-mon[74426]: from='mgr.14124 192.168.122.100:0/1266408288' entity='mgr.compute-0.wwpcae' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec 11 09:15:14 compute-0 ceph-mon[74426]: from='mgr.14124 192.168.122.100:0/1266408288' entity='mgr.compute-0.wwpcae' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 11 09:15:14 compute-0 ceph-mon[74426]: 2.5 scrub starts
Dec 11 09:15:14 compute-0 ceph-mon[74426]: 2.5 scrub ok
Dec 11 09:15:14 compute-0 podman[86953]: 2025-12-11 09:15:14.548715846 +0000 UTC m=+0.169417782 container init 749c2d9736b5bdbf6a6cd0fb2dd6340ce99961b44a1bd0f0b26eb30da2bfbe8c (image=quay.io/ceph/ceph:v19, name=nice_bose, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Dec 11 09:15:14 compute-0 podman[86953]: 2025-12-11 09:15:14.554912874 +0000 UTC m=+0.175614800 container start 749c2d9736b5bdbf6a6cd0fb2dd6340ce99961b44a1bd0f0b26eb30da2bfbe8c (image=quay.io/ceph/ceph:v19, name=nice_bose, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 11 09:15:14 compute-0 podman[86953]: 2025-12-11 09:15:14.55793265 +0000 UTC m=+0.178634586 container attach 749c2d9736b5bdbf6a6cd0fb2dd6340ce99961b44a1bd0f0b26eb30da2bfbe8c (image=quay.io/ceph/ceph:v19, name=nice_bose, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Dec 11 09:15:14 compute-0 ceph-mgr[74715]: [progress INFO root] Writing back 12 completed events
Dec 11 09:15:14 compute-0 ceph-mon[74426]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0)
Dec 11 09:15:14 compute-0 ceph-mon[74426]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/1266408288' entity='mgr.compute-0.wwpcae' 
Dec 11 09:15:14 compute-0 heuristic_turing[86968]: --> passed data devices: 0 physical, 1 LVM
Dec 11 09:15:14 compute-0 heuristic_turing[86968]: --> All data devices are unavailable
Dec 11 09:15:14 compute-0 systemd[1]: libpod-544ebadd71e9d9d36c013dccaaae4c037161114ba2ac869a0ffed4c1a85a942d.scope: Deactivated successfully.
Dec 11 09:15:14 compute-0 podman[86939]: 2025-12-11 09:15:14.949918621 +0000 UTC m=+0.646073511 container died 544ebadd71e9d9d36c013dccaaae4c037161114ba2ac869a0ffed4c1a85a942d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=heuristic_turing, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 11 09:15:14 compute-0 ceph-mgr[74715]: log_channel(audit) log [DBG] : from='client.14250 -' entity='client.admin' cmd=[{"prefix": "orch apply", "target": ["mon-mgr", ""]}]: dispatch
Dec 11 09:15:14 compute-0 ceph-mgr[74715]: [cephadm INFO root] Saving service node-exporter spec with placement *
Dec 11 09:15:14 compute-0 ceph-mgr[74715]: log_channel(cephadm) log [INF] : Saving service node-exporter spec with placement *
Dec 11 09:15:14 compute-0 ceph-mon[74426]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.node-exporter}] v 0)
Dec 11 09:15:14 compute-0 ceph-mon[74426]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/1266408288' entity='mgr.compute-0.wwpcae' 
Dec 11 09:15:14 compute-0 ceph-mgr[74715]: [cephadm INFO root] Saving service grafana spec with placement compute-0;count:1
Dec 11 09:15:14 compute-0 ceph-mgr[74715]: log_channel(cephadm) log [INF] : Saving service grafana spec with placement compute-0;count:1
Dec 11 09:15:14 compute-0 ceph-mon[74426]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.grafana}] v 0)
Dec 11 09:15:14 compute-0 systemd[1]: var-lib-containers-storage-overlay-be15d4b2dd92ea7a1c8334cca3d910ac0af57166dafe09529ad8367e9bb598f9-merged.mount: Deactivated successfully.
Dec 11 09:15:14 compute-0 ceph-mon[74426]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/1266408288' entity='mgr.compute-0.wwpcae' 
Dec 11 09:15:14 compute-0 ceph-mgr[74715]: [cephadm INFO root] Saving service prometheus spec with placement compute-0;count:1
Dec 11 09:15:14 compute-0 ceph-mgr[74715]: log_channel(cephadm) log [INF] : Saving service prometheus spec with placement compute-0;count:1
Dec 11 09:15:14 compute-0 ceph-mon[74426]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.prometheus}] v 0)
Dec 11 09:15:15 compute-0 ceph-mon[74426]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/1266408288' entity='mgr.compute-0.wwpcae' 
Dec 11 09:15:15 compute-0 podman[86939]: 2025-12-11 09:15:15.01488646 +0000 UTC m=+0.711041330 container remove 544ebadd71e9d9d36c013dccaaae4c037161114ba2ac869a0ffed4c1a85a942d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=heuristic_turing, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.40.1, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec 11 09:15:15 compute-0 ceph-mgr[74715]: [cephadm INFO root] Saving service alertmanager spec with placement compute-0;count:1
Dec 11 09:15:15 compute-0 ceph-mgr[74715]: log_channel(cephadm) log [INF] : Saving service alertmanager spec with placement compute-0;count:1
Dec 11 09:15:15 compute-0 ceph-mon[74426]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.alertmanager}] v 0)
Dec 11 09:15:15 compute-0 systemd[1]: libpod-conmon-544ebadd71e9d9d36c013dccaaae4c037161114ba2ac869a0ffed4c1a85a942d.scope: Deactivated successfully.
Dec 11 09:15:15 compute-0 ceph-mon[74426]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/1266408288' entity='mgr.compute-0.wwpcae' 
Dec 11 09:15:15 compute-0 nice_bose[86975]: Scheduled node-exporter update...
Dec 11 09:15:15 compute-0 nice_bose[86975]: Scheduled grafana update...
Dec 11 09:15:15 compute-0 nice_bose[86975]: Scheduled prometheus update...
Dec 11 09:15:15 compute-0 nice_bose[86975]: Scheduled alertmanager update...
Dec 11 09:15:15 compute-0 systemd[1]: libpod-749c2d9736b5bdbf6a6cd0fb2dd6340ce99961b44a1bd0f0b26eb30da2bfbe8c.scope: Deactivated successfully.
Dec 11 09:15:15 compute-0 podman[86953]: 2025-12-11 09:15:15.07648667 +0000 UTC m=+0.697188606 container died 749c2d9736b5bdbf6a6cd0fb2dd6340ce99961b44a1bd0f0b26eb30da2bfbe8c (image=quay.io/ceph/ceph:v19, name=nice_bose, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Dec 11 09:15:15 compute-0 sudo[86749]: pam_unix(sudo:session): session closed for user root
Dec 11 09:15:15 compute-0 systemd[1]: var-lib-containers-storage-overlay-e272bb3a1e257d15aa6208daaa1b7f95205382d8012c61bcfd9e1d4db34e9101-merged.mount: Deactivated successfully.
Dec 11 09:15:15 compute-0 podman[86953]: 2025-12-11 09:15:15.125017703 +0000 UTC m=+0.745719629 container remove 749c2d9736b5bdbf6a6cd0fb2dd6340ce99961b44a1bd0f0b26eb30da2bfbe8c (image=quay.io/ceph/ceph:v19, name=nice_bose, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid)
Dec 11 09:15:15 compute-0 systemd[1]: libpod-conmon-749c2d9736b5bdbf6a6cd0fb2dd6340ce99961b44a1bd0f0b26eb30da2bfbe8c.scope: Deactivated successfully.
Dec 11 09:15:15 compute-0 sudo[86931]: pam_unix(sudo:session): session closed for user root
Dec 11 09:15:15 compute-0 sudo[87029]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 11 09:15:15 compute-0 sudo[87029]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 11 09:15:15 compute-0 sudo[87029]: pam_unix(sudo:session): session closed for user root
Dec 11 09:15:15 compute-0 sudo[87060]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060 -- lvm list --format json
Dec 11 09:15:15 compute-0 sudo[87060]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 11 09:15:15 compute-0 ceph-osd[82859]: log_channel(cluster) log [DBG] : 6.c scrub starts
Dec 11 09:15:15 compute-0 ceph-osd[82859]: log_channel(cluster) log [DBG] : 6.c scrub ok
Dec 11 09:15:15 compute-0 sudo[87128]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uihrhppavcaeulmbzibblpfereriqyov ; /usr/bin/python3'
Dec 11 09:15:15 compute-0 sudo[87128]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 11 09:15:15 compute-0 ceph-mon[74426]: pgmap v97: 193 pgs: 15 peering, 178 active+clean; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Dec 11 09:15:15 compute-0 ceph-mon[74426]: 4.1d scrub starts
Dec 11 09:15:15 compute-0 ceph-mon[74426]: 4.1d scrub ok
Dec 11 09:15:15 compute-0 ceph-mon[74426]: 2.1c deep-scrub starts
Dec 11 09:15:15 compute-0 ceph-mon[74426]: 2.1c deep-scrub ok
Dec 11 09:15:15 compute-0 ceph-mon[74426]: from='mgr.14124 192.168.122.100:0/1266408288' entity='mgr.compute-0.wwpcae' 
Dec 11 09:15:15 compute-0 ceph-mon[74426]: from='mgr.14124 192.168.122.100:0/1266408288' entity='mgr.compute-0.wwpcae' 
Dec 11 09:15:15 compute-0 ceph-mon[74426]: from='mgr.14124 192.168.122.100:0/1266408288' entity='mgr.compute-0.wwpcae' 
Dec 11 09:15:15 compute-0 ceph-mon[74426]: from='mgr.14124 192.168.122.100:0/1266408288' entity='mgr.compute-0.wwpcae' 
Dec 11 09:15:15 compute-0 ceph-mon[74426]: from='mgr.14124 192.168.122.100:0/1266408288' entity='mgr.compute-0.wwpcae' 
Dec 11 09:15:15 compute-0 python3[87134]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --interactive  --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   config set mgr mgr/dashboard/server_port 8443 _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 11 09:15:15 compute-0 podman[87156]: 2025-12-11 09:15:15.784647708 +0000 UTC m=+0.057209711 container create f890e424c445f951fe405b442c314d6b699461dfeb6e76ef8dcd308a9fac6027 (image=quay.io/ceph/ceph:v19, name=blissful_mestorf, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, OSD_FLAVOR=default, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Dec 11 09:15:15 compute-0 systemd[1]: Started libpod-conmon-f890e424c445f951fe405b442c314d6b699461dfeb6e76ef8dcd308a9fac6027.scope.
Dec 11 09:15:15 compute-0 podman[87156]: 2025-12-11 09:15:15.762488709 +0000 UTC m=+0.035050732 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Dec 11 09:15:15 compute-0 podman[87149]: 2025-12-11 09:15:15.862944773 +0000 UTC m=+0.211063344 container create fe0c5594b356e46e7571019ff20256fffe33a39468b8f255538dcf10880b2b60 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=crazy_chatelet, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 11 09:15:15 compute-0 systemd[1]: Started libcrun container.
Dec 11 09:15:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e0369e393f00f472fb4824c251aef1e32e7eb79c73e6507370a281182e7670c5/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Dec 11 09:15:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e0369e393f00f472fb4824c251aef1e32e7eb79c73e6507370a281182e7670c5/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec 11 09:15:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e0369e393f00f472fb4824c251aef1e32e7eb79c73e6507370a281182e7670c5/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 11 09:15:15 compute-0 podman[87156]: 2025-12-11 09:15:15.903676356 +0000 UTC m=+0.176238369 container init f890e424c445f951fe405b442c314d6b699461dfeb6e76ef8dcd308a9fac6027 (image=quay.io/ceph/ceph:v19, name=blissful_mestorf, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec 11 09:15:15 compute-0 ceph-mon[74426]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd new", "uuid": "dff00437-d089-48b8-a12a-b56f6f1647c7"} v 0)
Dec 11 09:15:15 compute-0 ceph-mon[74426]: log_channel(audit) log [INF] : from='client.? ' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "dff00437-d089-48b8-a12a-b56f6f1647c7"}]: dispatch
Dec 11 09:15:15 compute-0 ceph-mon[74426]: mon.compute-0@0(leader).osd e33 do_prune osdmap full prune enabled
Dec 11 09:15:15 compute-0 systemd[1]: Started libpod-conmon-fe0c5594b356e46e7571019ff20256fffe33a39468b8f255538dcf10880b2b60.scope.
Dec 11 09:15:15 compute-0 podman[87156]: 2025-12-11 09:15:15.91691934 +0000 UTC m=+0.189481333 container start f890e424c445f951fe405b442c314d6b699461dfeb6e76ef8dcd308a9fac6027 (image=quay.io/ceph/ceph:v19, name=blissful_mestorf, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Dec 11 09:15:15 compute-0 ceph-mon[74426]: log_channel(audit) log [INF] : from='client.? ' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "dff00437-d089-48b8-a12a-b56f6f1647c7"}]': finished
Dec 11 09:15:15 compute-0 ceph-mon[74426]: mon.compute-0@0(leader).osd e34 e34: 3 total, 2 up, 3 in
Dec 11 09:15:15 compute-0 podman[87156]: 2025-12-11 09:15:15.922618272 +0000 UTC m=+0.195180265 container attach f890e424c445f951fe405b442c314d6b699461dfeb6e76ef8dcd308a9fac6027 (image=quay.io/ceph/ceph:v19, name=blissful_mestorf, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 11 09:15:15 compute-0 ceph-mon[74426]: log_channel(cluster) log [DBG] : osdmap e34: 3 total, 2 up, 3 in
Dec 11 09:15:15 compute-0 ceph-mon[74426]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Dec 11 09:15:15 compute-0 ceph-mon[74426]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/1266408288' entity='mgr.compute-0.wwpcae' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Dec 11 09:15:15 compute-0 ceph-mgr[74715]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Dec 11 09:15:15 compute-0 podman[87149]: 2025-12-11 09:15:15.841153506 +0000 UTC m=+0.189272097 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec 11 09:15:15 compute-0 systemd[1]: Started libcrun container.
Dec 11 09:15:15 compute-0 podman[87149]: 2025-12-11 09:15:15.962031833 +0000 UTC m=+0.310150414 container init fe0c5594b356e46e7571019ff20256fffe33a39468b8f255538dcf10880b2b60 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=crazy_chatelet, org.label-schema.build-date=20250325, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 11 09:15:15 compute-0 podman[87149]: 2025-12-11 09:15:15.970724551 +0000 UTC m=+0.318843152 container start fe0c5594b356e46e7571019ff20256fffe33a39468b8f255538dcf10880b2b60 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=crazy_chatelet, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Dec 11 09:15:15 compute-0 podman[87149]: 2025-12-11 09:15:15.975524545 +0000 UTC m=+0.323643116 container attach fe0c5594b356e46e7571019ff20256fffe33a39468b8f255538dcf10880b2b60 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=crazy_chatelet, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 11 09:15:15 compute-0 crazy_chatelet[87184]: 167 167
Dec 11 09:15:15 compute-0 systemd[1]: libpod-fe0c5594b356e46e7571019ff20256fffe33a39468b8f255538dcf10880b2b60.scope: Deactivated successfully.
Dec 11 09:15:15 compute-0 conmon[87184]: conmon fe0c5594b356e46e7571 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-fe0c5594b356e46e7571019ff20256fffe33a39468b8f255538dcf10880b2b60.scope/container/memory.events
Dec 11 09:15:15 compute-0 podman[87149]: 2025-12-11 09:15:15.979553624 +0000 UTC m=+0.327672185 container died fe0c5594b356e46e7571019ff20256fffe33a39468b8f255538dcf10880b2b60 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=crazy_chatelet, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, ceph=True, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Dec 11 09:15:16 compute-0 systemd[1]: var-lib-containers-storage-overlay-b4c4b011b3be57f671e02b812cb4504fa24ec830176919aa9a8981a340c68c9d-merged.mount: Deactivated successfully.
Dec 11 09:15:16 compute-0 podman[87149]: 2025-12-11 09:15:16.122810256 +0000 UTC m=+0.470928817 container remove fe0c5594b356e46e7571019ff20256fffe33a39468b8f255538dcf10880b2b60 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=crazy_chatelet, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec 11 09:15:16 compute-0 systemd[1]: libpod-conmon-fe0c5594b356e46e7571019ff20256fffe33a39468b8f255538dcf10880b2b60.scope: Deactivated successfully.
Dec 11 09:15:16 compute-0 ceph-mgr[74715]: log_channel(cluster) log [DBG] : pgmap v99: 193 pgs: 193 active+clean; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Dec 11 09:15:16 compute-0 ceph-mon[74426]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pgp_num_actual", "val": "32"} v 0)
Dec 11 09:15:16 compute-0 ceph-mon[74426]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/1266408288' entity='mgr.compute-0.wwpcae' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pgp_num_actual", "val": "32"}]: dispatch
Dec 11 09:15:16 compute-0 ceph-osd[82859]: log_channel(cluster) log [DBG] : 3.8 scrub starts
Dec 11 09:15:16 compute-0 ceph-osd[82859]: log_channel(cluster) log [DBG] : 3.8 scrub ok
Dec 11 09:15:16 compute-0 podman[87226]: 2025-12-11 09:15:16.306488163 +0000 UTC m=+0.051525459 container create 1df5e228df50e979b4f56839ec4d9315c24f11f7ecbdc692d489b17575d4215d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=stupefied_keldysh, CEPH_REF=squid, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 11 09:15:16 compute-0 ceph-mon[74426]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config set, name=mgr/dashboard/server_port}] v 0)
Dec 11 09:15:16 compute-0 systemd[1]: Started libpod-conmon-1df5e228df50e979b4f56839ec4d9315c24f11f7ecbdc692d489b17575d4215d.scope.
Dec 11 09:15:16 compute-0 ceph-mon[74426]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3760831286' entity='client.admin' 
Dec 11 09:15:16 compute-0 podman[87226]: 2025-12-11 09:15:16.283135956 +0000 UTC m=+0.028173252 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec 11 09:15:16 compute-0 systemd[1]: Started libcrun container.
Dec 11 09:15:16 compute-0 systemd[1]: libpod-f890e424c445f951fe405b442c314d6b699461dfeb6e76ef8dcd308a9fac6027.scope: Deactivated successfully.
Dec 11 09:15:16 compute-0 podman[87156]: 2025-12-11 09:15:16.397115352 +0000 UTC m=+0.669677355 container died f890e424c445f951fe405b442c314d6b699461dfeb6e76ef8dcd308a9fac6027 (image=quay.io/ceph/ceph:v19, name=blissful_mestorf, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325)
Dec 11 09:15:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/979aaaec25fa8282630eacd59610d12c765d2b4b2b526958600ef19d26498c7e/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 11 09:15:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/979aaaec25fa8282630eacd59610d12c765d2b4b2b526958600ef19d26498c7e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 11 09:15:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/979aaaec25fa8282630eacd59610d12c765d2b4b2b526958600ef19d26498c7e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 11 09:15:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/979aaaec25fa8282630eacd59610d12c765d2b4b2b526958600ef19d26498c7e/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 11 09:15:16 compute-0 podman[87226]: 2025-12-11 09:15:16.41955716 +0000 UTC m=+0.164594446 container init 1df5e228df50e979b4f56839ec4d9315c24f11f7ecbdc692d489b17575d4215d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=stupefied_keldysh, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325)
Dec 11 09:15:16 compute-0 podman[87226]: 2025-12-11 09:15:16.430496561 +0000 UTC m=+0.175533827 container start 1df5e228df50e979b4f56839ec4d9315c24f11f7ecbdc692d489b17575d4215d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=stupefied_keldysh, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 11 09:15:16 compute-0 podman[87226]: 2025-12-11 09:15:16.434937683 +0000 UTC m=+0.179974959 container attach 1df5e228df50e979b4f56839ec4d9315c24f11f7ecbdc692d489b17575d4215d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=stupefied_keldysh, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid)
Dec 11 09:15:16 compute-0 systemd[1]: var-lib-containers-storage-overlay-e0369e393f00f472fb4824c251aef1e32e7eb79c73e6507370a281182e7670c5-merged.mount: Deactivated successfully.
Dec 11 09:15:16 compute-0 podman[87156]: 2025-12-11 09:15:16.468141165 +0000 UTC m=+0.740703158 container remove f890e424c445f951fe405b442c314d6b699461dfeb6e76ef8dcd308a9fac6027 (image=quay.io/ceph/ceph:v19, name=blissful_mestorf, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=squid, org.label-schema.vendor=CentOS)
Dec 11 09:15:16 compute-0 systemd[1]: libpod-conmon-f890e424c445f951fe405b442c314d6b699461dfeb6e76ef8dcd308a9fac6027.scope: Deactivated successfully.
Dec 11 09:15:16 compute-0 sudo[87128]: pam_unix(sudo:session): session closed for user root
Dec 11 09:15:16 compute-0 ceph-mon[74426]: from='client.14250 -' entity='client.admin' cmd=[{"prefix": "orch apply", "target": ["mon-mgr", ""]}]: dispatch
Dec 11 09:15:16 compute-0 ceph-mon[74426]: Saving service node-exporter spec with placement *
Dec 11 09:15:16 compute-0 ceph-mon[74426]: Saving service grafana spec with placement compute-0;count:1
Dec 11 09:15:16 compute-0 ceph-mon[74426]: Saving service prometheus spec with placement compute-0;count:1
Dec 11 09:15:16 compute-0 ceph-mon[74426]: Saving service alertmanager spec with placement compute-0;count:1
Dec 11 09:15:16 compute-0 ceph-mon[74426]: 6.c scrub starts
Dec 11 09:15:16 compute-0 ceph-mon[74426]: 6.c scrub ok
Dec 11 09:15:16 compute-0 ceph-mon[74426]: 2.b scrub starts
Dec 11 09:15:16 compute-0 ceph-mon[74426]: 2.b scrub ok
Dec 11 09:15:16 compute-0 ceph-mon[74426]: from='client.? 192.168.122.102:0/1606460407' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "dff00437-d089-48b8-a12a-b56f6f1647c7"}]: dispatch
Dec 11 09:15:16 compute-0 ceph-mon[74426]: from='client.? ' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "dff00437-d089-48b8-a12a-b56f6f1647c7"}]: dispatch
Dec 11 09:15:16 compute-0 ceph-mon[74426]: from='client.? ' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "dff00437-d089-48b8-a12a-b56f6f1647c7"}]': finished
Dec 11 09:15:16 compute-0 ceph-mon[74426]: osdmap e34: 3 total, 2 up, 3 in
Dec 11 09:15:16 compute-0 ceph-mon[74426]: from='mgr.14124 192.168.122.100:0/1266408288' entity='mgr.compute-0.wwpcae' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Dec 11 09:15:16 compute-0 ceph-mon[74426]: from='mgr.14124 192.168.122.100:0/1266408288' entity='mgr.compute-0.wwpcae' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pgp_num_actual", "val": "32"}]: dispatch
Dec 11 09:15:16 compute-0 ceph-mon[74426]: from='client.? 192.168.122.100:0/3760831286' entity='client.admin' 
Dec 11 09:15:16 compute-0 ceph-mon[74426]: from='client.? 192.168.122.102:0/1631062346' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch
Dec 11 09:15:16 compute-0 sudo[87285]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mxreshvonyrcmaypwaioogjdvxfpqlgb ; /usr/bin/python3'
Dec 11 09:15:16 compute-0 sudo[87285]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 11 09:15:16 compute-0 stupefied_keldysh[87244]: {
Dec 11 09:15:16 compute-0 stupefied_keldysh[87244]:     "1": [
Dec 11 09:15:16 compute-0 stupefied_keldysh[87244]:         {
Dec 11 09:15:16 compute-0 stupefied_keldysh[87244]:             "devices": [
Dec 11 09:15:16 compute-0 stupefied_keldysh[87244]:                 "/dev/loop3"
Dec 11 09:15:16 compute-0 stupefied_keldysh[87244]:             ],
Dec 11 09:15:16 compute-0 stupefied_keldysh[87244]:             "lv_name": "ceph_lv0",
Dec 11 09:15:16 compute-0 stupefied_keldysh[87244]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec 11 09:15:16 compute-0 stupefied_keldysh[87244]:             "lv_size": "21470642176",
Dec 11 09:15:16 compute-0 stupefied_keldysh[87244]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=TIjPv6-JN4c-U1Hf-F3k1-2mgj-CwGi-pbjIHh,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=865a308b-cdc3-4034-b5eb-feb596b462bf,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 11 09:15:16 compute-0 stupefied_keldysh[87244]:             "lv_uuid": "TIjPv6-JN4c-U1Hf-F3k1-2mgj-CwGi-pbjIHh",
Dec 11 09:15:16 compute-0 stupefied_keldysh[87244]:             "name": "ceph_lv0",
Dec 11 09:15:16 compute-0 stupefied_keldysh[87244]:             "path": "/dev/ceph_vg0/ceph_lv0",
Dec 11 09:15:16 compute-0 stupefied_keldysh[87244]:             "tags": {
Dec 11 09:15:16 compute-0 stupefied_keldysh[87244]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec 11 09:15:16 compute-0 stupefied_keldysh[87244]:                 "ceph.block_uuid": "TIjPv6-JN4c-U1Hf-F3k1-2mgj-CwGi-pbjIHh",
Dec 11 09:15:16 compute-0 stupefied_keldysh[87244]:                 "ceph.cephx_lockbox_secret": "",
Dec 11 09:15:16 compute-0 stupefied_keldysh[87244]:                 "ceph.cluster_fsid": "31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060",
Dec 11 09:15:16 compute-0 stupefied_keldysh[87244]:                 "ceph.cluster_name": "ceph",
Dec 11 09:15:16 compute-0 stupefied_keldysh[87244]:                 "ceph.crush_device_class": "",
Dec 11 09:15:16 compute-0 stupefied_keldysh[87244]:                 "ceph.encrypted": "0",
Dec 11 09:15:16 compute-0 stupefied_keldysh[87244]:                 "ceph.osd_fsid": "865a308b-cdc3-4034-b5eb-feb596b462bf",
Dec 11 09:15:16 compute-0 stupefied_keldysh[87244]:                 "ceph.osd_id": "1",
Dec 11 09:15:16 compute-0 stupefied_keldysh[87244]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 11 09:15:16 compute-0 stupefied_keldysh[87244]:                 "ceph.type": "block",
Dec 11 09:15:16 compute-0 stupefied_keldysh[87244]:                 "ceph.vdo": "0",
Dec 11 09:15:16 compute-0 stupefied_keldysh[87244]:                 "ceph.with_tpm": "0"
Dec 11 09:15:16 compute-0 stupefied_keldysh[87244]:             },
Dec 11 09:15:16 compute-0 stupefied_keldysh[87244]:             "type": "block",
Dec 11 09:15:16 compute-0 stupefied_keldysh[87244]:             "vg_name": "ceph_vg0"
Dec 11 09:15:16 compute-0 stupefied_keldysh[87244]:         }
Dec 11 09:15:16 compute-0 stupefied_keldysh[87244]:     ]
Dec 11 09:15:16 compute-0 stupefied_keldysh[87244]: }
Dec 11 09:15:16 compute-0 ceph-mon[74426]: log_channel(cluster) log [DBG] : Standby manager daemon compute-2.uiimcn started
Dec 11 09:15:16 compute-0 ceph-mgr[74715]: mgr.server handle_open ignoring open from mgr.compute-2.uiimcn 192.168.122.102:0/592002921; not ready for session (expect reconnect)
Dec 11 09:15:16 compute-0 systemd[1]: libpod-1df5e228df50e979b4f56839ec4d9315c24f11f7ecbdc692d489b17575d4215d.scope: Deactivated successfully.
Dec 11 09:15:16 compute-0 conmon[87244]: conmon 1df5e228df50e979b4f5 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-1df5e228df50e979b4f56839ec4d9315c24f11f7ecbdc692d489b17575d4215d.scope/container/memory.events
Dec 11 09:15:16 compute-0 podman[87226]: 2025-12-11 09:15:16.793841136 +0000 UTC m=+0.538878402 container died 1df5e228df50e979b4f56839ec4d9315c24f11f7ecbdc692d489b17575d4215d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=stupefied_keldysh, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.40.1)
Dec 11 09:15:16 compute-0 python3[87288]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --interactive  --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   config set mgr mgr/dashboard/ssl_server_port 8443 _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 11 09:15:16 compute-0 systemd[1]: var-lib-containers-storage-overlay-979aaaec25fa8282630eacd59610d12c765d2b4b2b526958600ef19d26498c7e-merged.mount: Deactivated successfully.
Dec 11 09:15:16 compute-0 podman[87226]: 2025-12-11 09:15:16.843002158 +0000 UTC m=+0.588039424 container remove 1df5e228df50e979b4f56839ec4d9315c24f11f7ecbdc692d489b17575d4215d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=stupefied_keldysh, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 11 09:15:16 compute-0 systemd[1]: libpod-conmon-1df5e228df50e979b4f56839ec4d9315c24f11f7ecbdc692d489b17575d4215d.scope: Deactivated successfully.
Dec 11 09:15:16 compute-0 podman[87298]: 2025-12-11 09:15:16.903470013 +0000 UTC m=+0.062268503 container create 03fb0b117920bb7044aac18ba8671af275aa2e28f59edb5a00dbaf534f14f5ff (image=quay.io/ceph/ceph:v19, name=vigorous_nash, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 11 09:15:16 compute-0 sudo[87060]: pam_unix(sudo:session): session closed for user root
Dec 11 09:15:16 compute-0 ceph-mon[74426]: mon.compute-0@0(leader).osd e34 do_prune osdmap full prune enabled
Dec 11 09:15:16 compute-0 ceph-mon[74426]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/1266408288' entity='mgr.compute-0.wwpcae' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pgp_num_actual", "val": "32"}]': finished
Dec 11 09:15:16 compute-0 ceph-mon[74426]: mon.compute-0@0(leader).osd e35 e35: 3 total, 2 up, 3 in
Dec 11 09:15:16 compute-0 ceph-mon[74426]: log_channel(cluster) log [DBG] : osdmap e35: 3 total, 2 up, 3 in
Dec 11 09:15:16 compute-0 ceph-mon[74426]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Dec 11 09:15:16 compute-0 ceph-mon[74426]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/1266408288' entity='mgr.compute-0.wwpcae' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Dec 11 09:15:16 compute-0 ceph-mgr[74715]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Dec 11 09:15:16 compute-0 systemd[1]: Started libpod-conmon-03fb0b117920bb7044aac18ba8671af275aa2e28f59edb5a00dbaf534f14f5ff.scope.
Dec 11 09:15:16 compute-0 systemd[1]: Started libcrun container.
Dec 11 09:15:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1fb2eef8dfc9ad2fd109f9c344e3d70e9ef92b7def71088df21ad17ba1dfe719/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Dec 11 09:15:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1fb2eef8dfc9ad2fd109f9c344e3d70e9ef92b7def71088df21ad17ba1dfe719/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec 11 09:15:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1fb2eef8dfc9ad2fd109f9c344e3d70e9ef92b7def71088df21ad17ba1dfe719/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 11 09:15:16 compute-0 podman[87298]: 2025-12-11 09:15:16.885449316 +0000 UTC m=+0.044247836 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Dec 11 09:15:16 compute-0 sudo[87316]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 11 09:15:16 compute-0 sudo[87316]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 11 09:15:16 compute-0 podman[87298]: 2025-12-11 09:15:16.991036884 +0000 UTC m=+0.149835414 container init 03fb0b117920bb7044aac18ba8671af275aa2e28f59edb5a00dbaf534f14f5ff (image=quay.io/ceph/ceph:v19, name=vigorous_nash, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.40.1, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Dec 11 09:15:16 compute-0 sudo[87316]: pam_unix(sudo:session): session closed for user root
Dec 11 09:15:17 compute-0 podman[87298]: 2025-12-11 09:15:17.002531232 +0000 UTC m=+0.161329732 container start 03fb0b117920bb7044aac18ba8671af275aa2e28f59edb5a00dbaf534f14f5ff (image=quay.io/ceph/ceph:v19, name=vigorous_nash, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True)
Dec 11 09:15:17 compute-0 podman[87298]: 2025-12-11 09:15:17.006467268 +0000 UTC m=+0.165265768 container attach 03fb0b117920bb7044aac18ba8671af275aa2e28f59edb5a00dbaf534f14f5ff (image=quay.io/ceph/ceph:v19, name=vigorous_nash, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Dec 11 09:15:17 compute-0 sudo[87347]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060 -- raw list --format json
Dec 11 09:15:17 compute-0 sudo[87347]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 11 09:15:17 compute-0 ceph-osd[82859]: log_channel(cluster) log [DBG] : 4.f scrub starts
Dec 11 09:15:17 compute-0 ceph-osd[82859]: log_channel(cluster) log [DBG] : 4.f scrub ok
Dec 11 09:15:17 compute-0 ceph-mon[74426]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config set, name=mgr/dashboard/ssl_server_port}] v 0)
Dec 11 09:15:17 compute-0 ceph-mon[74426]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1303046427' entity='client.admin' 
Dec 11 09:15:17 compute-0 systemd[1]: libpod-03fb0b117920bb7044aac18ba8671af275aa2e28f59edb5a00dbaf534f14f5ff.scope: Deactivated successfully.
Dec 11 09:15:17 compute-0 podman[87298]: 2025-12-11 09:15:17.394185022 +0000 UTC m=+0.552983512 container died 03fb0b117920bb7044aac18ba8671af275aa2e28f59edb5a00dbaf534f14f5ff (image=quay.io/ceph/ceph:v19, name=vigorous_nash, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Dec 11 09:15:17 compute-0 ceph-mon[74426]: pgmap v99: 193 pgs: 193 active+clean; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Dec 11 09:15:17 compute-0 ceph-mon[74426]: 3.8 scrub starts
Dec 11 09:15:17 compute-0 ceph-mon[74426]: 3.8 scrub ok
Dec 11 09:15:17 compute-0 ceph-mon[74426]: 6.1e deep-scrub starts
Dec 11 09:15:17 compute-0 ceph-mon[74426]: 6.1e deep-scrub ok
Dec 11 09:15:17 compute-0 ceph-mon[74426]: Standby manager daemon compute-2.uiimcn started
Dec 11 09:15:17 compute-0 ceph-mon[74426]: from='mgr.14124 192.168.122.100:0/1266408288' entity='mgr.compute-0.wwpcae' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pgp_num_actual", "val": "32"}]': finished
Dec 11 09:15:17 compute-0 ceph-mon[74426]: osdmap e35: 3 total, 2 up, 3 in
Dec 11 09:15:17 compute-0 ceph-mon[74426]: from='mgr.14124 192.168.122.100:0/1266408288' entity='mgr.compute-0.wwpcae' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Dec 11 09:15:17 compute-0 ceph-mon[74426]: from='client.? 192.168.122.100:0/1303046427' entity='client.admin' 
Dec 11 09:15:17 compute-0 systemd[1]: var-lib-containers-storage-overlay-1fb2eef8dfc9ad2fd109f9c344e3d70e9ef92b7def71088df21ad17ba1dfe719-merged.mount: Deactivated successfully.
Dec 11 09:15:17 compute-0 ceph-mon[74426]: log_channel(cluster) log [DBG] : mgrmap e10: compute-0.wwpcae(active, since 2m), standbys: compute-2.uiimcn
Dec 11 09:15:17 compute-0 ceph-mon[74426]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr metadata", "who": "compute-2.uiimcn", "id": "compute-2.uiimcn"} v 0)
Dec 11 09:15:17 compute-0 ceph-mon[74426]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/1266408288' entity='mgr.compute-0.wwpcae' cmd=[{"prefix": "mgr metadata", "who": "compute-2.uiimcn", "id": "compute-2.uiimcn"}]: dispatch
Dec 11 09:15:17 compute-0 podman[87298]: 2025-12-11 09:15:17.644979217 +0000 UTC m=+0.803777707 container remove 03fb0b117920bb7044aac18ba8671af275aa2e28f59edb5a00dbaf534f14f5ff (image=quay.io/ceph/ceph:v19, name=vigorous_nash, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0)
Dec 11 09:15:17 compute-0 systemd[1]: libpod-conmon-03fb0b117920bb7044aac18ba8671af275aa2e28f59edb5a00dbaf534f14f5ff.scope: Deactivated successfully.
Dec 11 09:15:17 compute-0 sudo[87285]: pam_unix(sudo:session): session closed for user root
Dec 11 09:15:17 compute-0 podman[87441]: 2025-12-11 09:15:17.720477252 +0000 UTC m=+0.043613067 container create 1776e39e88206d7424139b35cd134c52009de53ceb9f402f282e9a8d3b1f239c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nostalgic_grothendieck, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, io.buildah.version=1.40.1)
Dec 11 09:15:17 compute-0 systemd[1]: Started libpod-conmon-1776e39e88206d7424139b35cd134c52009de53ceb9f402f282e9a8d3b1f239c.scope.
Dec 11 09:15:17 compute-0 systemd[1]: Started libcrun container.
Dec 11 09:15:17 compute-0 podman[87441]: 2025-12-11 09:15:17.701568127 +0000 UTC m=+0.024703962 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec 11 09:15:17 compute-0 podman[87441]: 2025-12-11 09:15:17.807516417 +0000 UTC m=+0.130652252 container init 1776e39e88206d7424139b35cd134c52009de53ceb9f402f282e9a8d3b1f239c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nostalgic_grothendieck, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 11 09:15:17 compute-0 podman[87441]: 2025-12-11 09:15:17.814683245 +0000 UTC m=+0.137819060 container start 1776e39e88206d7424139b35cd134c52009de53ceb9f402f282e9a8d3b1f239c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nostalgic_grothendieck, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True)
Dec 11 09:15:17 compute-0 podman[87441]: 2025-12-11 09:15:17.818118306 +0000 UTC m=+0.141254141 container attach 1776e39e88206d7424139b35cd134c52009de53ceb9f402f282e9a8d3b1f239c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nostalgic_grothendieck, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=squid, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 11 09:15:17 compute-0 nostalgic_grothendieck[87458]: 167 167
Dec 11 09:15:17 compute-0 systemd[1]: libpod-1776e39e88206d7424139b35cd134c52009de53ceb9f402f282e9a8d3b1f239c.scope: Deactivated successfully.
Dec 11 09:15:17 compute-0 sudo[87483]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oldtiihipvaxwgvrggkhjynojhcklnpq ; /usr/bin/python3'
Dec 11 09:15:17 compute-0 podman[87441]: 2025-12-11 09:15:17.8191691 +0000 UTC m=+0.142304915 container died 1776e39e88206d7424139b35cd134c52009de53ceb9f402f282e9a8d3b1f239c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nostalgic_grothendieck, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, CEPH_REF=squid, ceph=True, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Dec 11 09:15:17 compute-0 sudo[87483]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 11 09:15:17 compute-0 systemd[1]: var-lib-containers-storage-overlay-ba43c9298a70d7769ddcd743babb1bb1a0e02dec2cfab58e062b4798e431f9d9-merged.mount: Deactivated successfully.
Dec 11 09:15:17 compute-0 podman[87441]: 2025-12-11 09:15:17.860235453 +0000 UTC m=+0.183371268 container remove 1776e39e88206d7424139b35cd134c52009de53ceb9f402f282e9a8d3b1f239c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nostalgic_grothendieck, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 11 09:15:17 compute-0 systemd[1]: libpod-conmon-1776e39e88206d7424139b35cd134c52009de53ceb9f402f282e9a8d3b1f239c.scope: Deactivated successfully.
Dec 11 09:15:17 compute-0 python3[87488]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --interactive  --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   config set mgr mgr/dashboard/ssl false _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 11 09:15:18 compute-0 podman[87505]: 2025-12-11 09:15:18.021484722 +0000 UTC m=+0.050679323 container create 06b329b58bfac80263d6cbc3e2ca093f1c050073b1ad7352b5295ea8e49a4b37 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=dazzling_bhaskara, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 11 09:15:18 compute-0 podman[87512]: 2025-12-11 09:15:18.039109556 +0000 UTC m=+0.054335840 container create bb45a82d3d7fab566c64075ba02ce214c16f0f39e9d8f3154d3e4cbbe0c222b6 (image=quay.io/ceph/ceph:v19, name=youthful_hertz, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, CEPH_REF=squid, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec 11 09:15:18 compute-0 systemd[1]: Started libpod-conmon-06b329b58bfac80263d6cbc3e2ca093f1c050073b1ad7352b5295ea8e49a4b37.scope.
Dec 11 09:15:18 compute-0 systemd[1]: Started libpod-conmon-bb45a82d3d7fab566c64075ba02ce214c16f0f39e9d8f3154d3e4cbbe0c222b6.scope.
Dec 11 09:15:18 compute-0 systemd[1]: Started libcrun container.
Dec 11 09:15:18 compute-0 systemd[1]: Started libcrun container.
Dec 11 09:15:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b1867d113366dc753833532b044f9fdfab74745461c17475fc152bceedcdd6c5/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 11 09:15:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c4f4cd61f7e427180847f9b108040a25c88693f8237a8087332fbfb7826f0efc/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec 11 09:15:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c4f4cd61f7e427180847f9b108040a25c88693f8237a8087332fbfb7826f0efc/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 11 09:15:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c4f4cd61f7e427180847f9b108040a25c88693f8237a8087332fbfb7826f0efc/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Dec 11 09:15:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b1867d113366dc753833532b044f9fdfab74745461c17475fc152bceedcdd6c5/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 11 09:15:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b1867d113366dc753833532b044f9fdfab74745461c17475fc152bceedcdd6c5/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 11 09:15:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b1867d113366dc753833532b044f9fdfab74745461c17475fc152bceedcdd6c5/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 11 09:15:18 compute-0 podman[87505]: 2025-12-11 09:15:17.998885678 +0000 UTC m=+0.028080309 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec 11 09:15:18 compute-0 podman[87505]: 2025-12-11 09:15:18.098493596 +0000 UTC m=+0.127688217 container init 06b329b58bfac80263d6cbc3e2ca093f1c050073b1ad7352b5295ea8e49a4b37 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=dazzling_bhaskara, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, OSD_FLAVOR=default, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325)
Dec 11 09:15:18 compute-0 podman[87512]: 2025-12-11 09:15:18.014500159 +0000 UTC m=+0.029726473 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Dec 11 09:15:18 compute-0 podman[87505]: 2025-12-11 09:15:18.106372098 +0000 UTC m=+0.135566699 container start 06b329b58bfac80263d6cbc3e2ca093f1c050073b1ad7352b5295ea8e49a4b37 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=dazzling_bhaskara, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, ceph=True, org.label-schema.schema-version=1.0)
Dec 11 09:15:18 compute-0 podman[87512]: 2025-12-11 09:15:18.108295469 +0000 UTC m=+0.123521773 container init bb45a82d3d7fab566c64075ba02ce214c16f0f39e9d8f3154d3e4cbbe0c222b6 (image=quay.io/ceph/ceph:v19, name=youthful_hertz, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 11 09:15:18 compute-0 podman[87505]: 2025-12-11 09:15:18.111180711 +0000 UTC m=+0.140375532 container attach 06b329b58bfac80263d6cbc3e2ca093f1c050073b1ad7352b5295ea8e49a4b37 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=dazzling_bhaskara, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, CEPH_REF=squid)
Dec 11 09:15:18 compute-0 podman[87512]: 2025-12-11 09:15:18.115545301 +0000 UTC m=+0.130771585 container start bb45a82d3d7fab566c64075ba02ce214c16f0f39e9d8f3154d3e4cbbe0c222b6 (image=quay.io/ceph/ceph:v19, name=youthful_hertz, ceph=True, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec 11 09:15:18 compute-0 ceph-mon[74426]: log_channel(cluster) log [DBG] : Standby manager daemon compute-1.unesvp started
Dec 11 09:15:18 compute-0 ceph-mgr[74715]: mgr.server handle_open ignoring open from mgr.compute-1.unesvp 192.168.122.101:0/841876903; not ready for session (expect reconnect)
Dec 11 09:15:18 compute-0 podman[87512]: 2025-12-11 09:15:18.119294531 +0000 UTC m=+0.134520815 container attach bb45a82d3d7fab566c64075ba02ce214c16f0f39e9d8f3154d3e4cbbe0c222b6 (image=quay.io/ceph/ceph:v19, name=youthful_hertz, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 11 09:15:18 compute-0 ceph-mgr[74715]: log_channel(cluster) log [DBG] : pgmap v101: 193 pgs: 193 active+clean; 449 KiB data, 54 MiB used, 40 GiB / 40 GiB avail
Dec 11 09:15:18 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 35 pg[7.13( empty local-lis/les=0/0 n=0 ec=32/21 lis/c=32/32 les/c/f=33/33/0 sis=35) [1] r=0 lpr=35 pi=[32,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 11 09:15:18 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 35 pg[7.10( empty local-lis/les=0/0 n=0 ec=32/21 lis/c=32/32 les/c/f=33/33/0 sis=35) [1] r=0 lpr=35 pi=[32,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 11 09:15:18 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 35 pg[7.14( empty local-lis/les=0/0 n=0 ec=32/21 lis/c=32/32 les/c/f=33/33/0 sis=35) [1] r=0 lpr=35 pi=[32,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 11 09:15:18 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 35 pg[7.1d( empty local-lis/les=0/0 n=0 ec=32/21 lis/c=32/32 les/c/f=33/33/0 sis=35) [1] r=0 lpr=35 pi=[32,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 11 09:15:18 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 35 pg[7.a( empty local-lis/les=0/0 n=0 ec=32/21 lis/c=32/32 les/c/f=33/33/0 sis=35) [1] r=0 lpr=35 pi=[32,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 11 09:15:18 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 35 pg[7.8( empty local-lis/les=0/0 n=0 ec=32/21 lis/c=32/32 les/c/f=33/33/0 sis=35) [1] r=0 lpr=35 pi=[32,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 11 09:15:18 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 35 pg[7.b( empty local-lis/les=0/0 n=0 ec=32/21 lis/c=32/32 les/c/f=33/33/0 sis=35) [1] r=0 lpr=35 pi=[32,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 11 09:15:18 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 35 pg[7.9( empty local-lis/les=0/0 n=0 ec=32/21 lis/c=32/32 les/c/f=33/33/0 sis=35) [1] r=0 lpr=35 pi=[32,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 11 09:15:18 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 35 pg[7.6( empty local-lis/les=0/0 n=0 ec=32/21 lis/c=32/32 les/c/f=33/33/0 sis=35) [1] r=0 lpr=35 pi=[32,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 11 09:15:18 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 35 pg[7.e( empty local-lis/les=0/0 n=0 ec=32/21 lis/c=32/32 les/c/f=33/33/0 sis=35) [1] r=0 lpr=35 pi=[32,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 11 09:15:18 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 35 pg[7.4( empty local-lis/les=0/0 n=0 ec=32/21 lis/c=32/32 les/c/f=33/33/0 sis=35) [1] r=0 lpr=35 pi=[32,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 11 09:15:18 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 35 pg[7.3( empty local-lis/les=0/0 n=0 ec=32/21 lis/c=32/32 les/c/f=33/33/0 sis=35) [1] r=0 lpr=35 pi=[32,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 11 09:15:18 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 35 pg[7.2( empty local-lis/les=0/0 n=0 ec=32/21 lis/c=32/32 les/c/f=33/33/0 sis=35) [1] r=0 lpr=35 pi=[32,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 11 09:15:18 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 35 pg[7.f( empty local-lis/les=0/0 n=0 ec=32/21 lis/c=32/32 les/c/f=33/33/0 sis=35) [1] r=0 lpr=35 pi=[32,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 11 09:15:18 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 35 pg[7.1e( empty local-lis/les=0/0 n=0 ec=32/21 lis/c=32/32 les/c/f=33/33/0 sis=35) [1] r=0 lpr=35 pi=[32,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 11 09:15:18 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 35 pg[7.18( empty local-lis/les=0/0 n=0 ec=32/21 lis/c=32/32 les/c/f=33/33/0 sis=35) [1] r=0 lpr=35 pi=[32,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 11 09:15:18 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 35 pg[7.1b( empty local-lis/les=0/0 n=0 ec=32/21 lis/c=32/32 les/c/f=33/33/0 sis=35) [1] r=0 lpr=35 pi=[32,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 11 09:15:18 compute-0 ceph-osd[82859]: log_channel(cluster) log [DBG] : 4.3 scrub starts
Dec 11 09:15:18 compute-0 ceph-osd[82859]: log_channel(cluster) log [DBG] : 4.3 scrub ok
Dec 11 09:15:18 compute-0 ceph-mon[74426]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config set, name=mgr/dashboard/ssl}] v 0)
Dec 11 09:15:18 compute-0 ceph-mon[74426]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2048146887' entity='client.admin' 
Dec 11 09:15:18 compute-0 systemd[1]: libpod-bb45a82d3d7fab566c64075ba02ce214c16f0f39e9d8f3154d3e4cbbe0c222b6.scope: Deactivated successfully.
Dec 11 09:15:18 compute-0 podman[87512]: 2025-12-11 09:15:18.522963536 +0000 UTC m=+0.538189830 container died bb45a82d3d7fab566c64075ba02ce214c16f0f39e9d8f3154d3e4cbbe0c222b6 (image=quay.io/ceph/ceph:v19, name=youthful_hertz, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325)
Dec 11 09:15:18 compute-0 podman[87512]: 2025-12-11 09:15:18.621272441 +0000 UTC m=+0.636498725 container remove bb45a82d3d7fab566c64075ba02ce214c16f0f39e9d8f3154d3e4cbbe0c222b6 (image=quay.io/ceph/ceph:v19, name=youthful_hertz, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 11 09:15:18 compute-0 ceph-mon[74426]: mon.compute-0@0(leader).osd e35 do_prune osdmap full prune enabled
Dec 11 09:15:18 compute-0 systemd[1]: var-lib-containers-storage-overlay-c4f4cd61f7e427180847f9b108040a25c88693f8237a8087332fbfb7826f0efc-merged.mount: Deactivated successfully.
Dec 11 09:15:18 compute-0 ceph-mon[74426]: 4.f scrub starts
Dec 11 09:15:18 compute-0 ceph-mon[74426]: 4.f scrub ok
Dec 11 09:15:18 compute-0 ceph-mon[74426]: mgrmap e10: compute-0.wwpcae(active, since 2m), standbys: compute-2.uiimcn
Dec 11 09:15:18 compute-0 ceph-mon[74426]: from='mgr.14124 192.168.122.100:0/1266408288' entity='mgr.compute-0.wwpcae' cmd=[{"prefix": "mgr metadata", "who": "compute-2.uiimcn", "id": "compute-2.uiimcn"}]: dispatch
Dec 11 09:15:18 compute-0 ceph-mon[74426]: 5.1f scrub starts
Dec 11 09:15:18 compute-0 ceph-mon[74426]: 5.1f scrub ok
Dec 11 09:15:18 compute-0 ceph-mon[74426]: Standby manager daemon compute-1.unesvp started
Dec 11 09:15:18 compute-0 ceph-mon[74426]: from='client.? 192.168.122.100:0/2048146887' entity='client.admin' 
Dec 11 09:15:18 compute-0 sudo[87483]: pam_unix(sudo:session): session closed for user root
Dec 11 09:15:18 compute-0 ceph-mon[74426]: mon.compute-0@0(leader).osd e36 e36: 3 total, 2 up, 3 in
Dec 11 09:15:18 compute-0 systemd[1]: libpod-conmon-bb45a82d3d7fab566c64075ba02ce214c16f0f39e9d8f3154d3e4cbbe0c222b6.scope: Deactivated successfully.
Dec 11 09:15:18 compute-0 ceph-mon[74426]: log_channel(cluster) log [DBG] : osdmap e36: 3 total, 2 up, 3 in
Dec 11 09:15:18 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 36 pg[7.18( empty local-lis/les=35/36 n=0 ec=32/21 lis/c=32/32 les/c/f=33/33/0 sis=35) [1] r=0 lpr=35 pi=[32,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 11 09:15:18 compute-0 ceph-mon[74426]: log_channel(cluster) log [DBG] : mgrmap e11: compute-0.wwpcae(active, since 2m), standbys: compute-2.uiimcn, compute-1.unesvp
Dec 11 09:15:18 compute-0 ceph-mon[74426]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Dec 11 09:15:18 compute-0 ceph-mon[74426]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/1266408288' entity='mgr.compute-0.wwpcae' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Dec 11 09:15:18 compute-0 ceph-mon[74426]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr metadata", "who": "compute-1.unesvp", "id": "compute-1.unesvp"} v 0)
Dec 11 09:15:18 compute-0 ceph-mon[74426]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/1266408288' entity='mgr.compute-0.wwpcae' cmd=[{"prefix": "mgr metadata", "who": "compute-1.unesvp", "id": "compute-1.unesvp"}]: dispatch
Dec 11 09:15:18 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 36 pg[7.1b( empty local-lis/les=35/36 n=0 ec=32/21 lis/c=32/32 les/c/f=33/33/0 sis=35) [1] r=0 lpr=35 pi=[32,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 11 09:15:18 compute-0 ceph-mgr[74715]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Dec 11 09:15:18 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 36 pg[7.1e( empty local-lis/les=35/36 n=0 ec=32/21 lis/c=32/32 les/c/f=33/33/0 sis=35) [1] r=0 lpr=35 pi=[32,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 11 09:15:18 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 36 pg[7.6( empty local-lis/les=35/36 n=0 ec=32/21 lis/c=32/32 les/c/f=33/33/0 sis=35) [1] r=0 lpr=35 pi=[32,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 11 09:15:18 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 36 pg[7.2( empty local-lis/les=35/36 n=0 ec=32/21 lis/c=32/32 les/c/f=33/33/0 sis=35) [1] r=0 lpr=35 pi=[32,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 11 09:15:18 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 36 pg[7.4( empty local-lis/les=35/36 n=0 ec=32/21 lis/c=32/32 les/c/f=33/33/0 sis=35) [1] r=0 lpr=35 pi=[32,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 11 09:15:18 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 36 pg[7.8( empty local-lis/les=35/36 n=0 ec=32/21 lis/c=32/32 les/c/f=33/33/0 sis=35) [1] r=0 lpr=35 pi=[32,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 11 09:15:18 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 36 pg[7.e( empty local-lis/les=35/36 n=0 ec=32/21 lis/c=32/32 les/c/f=33/33/0 sis=35) [1] r=0 lpr=35 pi=[32,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 11 09:15:18 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 36 pg[7.9( empty local-lis/les=35/36 n=0 ec=32/21 lis/c=32/32 les/c/f=33/33/0 sis=35) [1] r=0 lpr=35 pi=[32,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 11 09:15:18 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 36 pg[7.a( empty local-lis/les=35/36 n=0 ec=32/21 lis/c=32/32 les/c/f=33/33/0 sis=35) [1] r=0 lpr=35 pi=[32,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 11 09:15:18 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 36 pg[7.b( empty local-lis/les=35/36 n=0 ec=32/21 lis/c=32/32 les/c/f=33/33/0 sis=35) [1] r=0 lpr=35 pi=[32,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 11 09:15:18 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 36 pg[7.14( empty local-lis/les=35/36 n=0 ec=32/21 lis/c=32/32 les/c/f=33/33/0 sis=35) [1] r=0 lpr=35 pi=[32,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 11 09:15:18 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 36 pg[7.f( empty local-lis/les=35/36 n=0 ec=32/21 lis/c=32/32 les/c/f=33/33/0 sis=35) [1] r=0 lpr=35 pi=[32,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 11 09:15:18 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 36 pg[7.3( empty local-lis/les=35/36 n=0 ec=32/21 lis/c=32/32 les/c/f=33/33/0 sis=35) [1] r=0 lpr=35 pi=[32,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 11 09:15:18 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 36 pg[7.13( empty local-lis/les=35/36 n=0 ec=32/21 lis/c=32/32 les/c/f=33/33/0 sis=35) [1] r=0 lpr=35 pi=[32,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 11 09:15:18 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 36 pg[7.1d( empty local-lis/les=35/36 n=0 ec=32/21 lis/c=32/32 les/c/f=33/33/0 sis=35) [1] r=0 lpr=35 pi=[32,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 11 09:15:18 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 36 pg[7.10( empty local-lis/les=35/36 n=0 ec=32/21 lis/c=32/32 les/c/f=33/33/0 sis=35) [1] r=0 lpr=35 pi=[32,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 11 09:15:18 compute-0 lvm[87645]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Dec 11 09:15:18 compute-0 lvm[87645]: VG ceph_vg0 finished
Dec 11 09:15:18 compute-0 dazzling_bhaskara[87536]: {}
Dec 11 09:15:18 compute-0 systemd[1]: libpod-06b329b58bfac80263d6cbc3e2ca093f1c050073b1ad7352b5295ea8e49a4b37.scope: Deactivated successfully.
Dec 11 09:15:18 compute-0 systemd[1]: libpod-06b329b58bfac80263d6cbc3e2ca093f1c050073b1ad7352b5295ea8e49a4b37.scope: Consumed 1.337s CPU time.
Dec 11 09:15:18 compute-0 conmon[87536]: conmon 06b329b58bfac80263d6 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-06b329b58bfac80263d6cbc3e2ca093f1c050073b1ad7352b5295ea8e49a4b37.scope/container/memory.events
Dec 11 09:15:18 compute-0 podman[87505]: 2025-12-11 09:15:18.968405077 +0000 UTC m=+0.997599688 container died 06b329b58bfac80263d6cbc3e2ca093f1c050073b1ad7352b5295ea8e49a4b37 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=dazzling_bhaskara, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Dec 11 09:15:18 compute-0 ceph-mon[74426]: mon.compute-0@0(leader).osd e36 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 11 09:15:19 compute-0 systemd[1]: var-lib-containers-storage-overlay-b1867d113366dc753833532b044f9fdfab74745461c17475fc152bceedcdd6c5-merged.mount: Deactivated successfully.
Dec 11 09:15:19 compute-0 podman[87505]: 2025-12-11 09:15:19.024832482 +0000 UTC m=+1.054027083 container remove 06b329b58bfac80263d6cbc3e2ca093f1c050073b1ad7352b5295ea8e49a4b37 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=dazzling_bhaskara, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.build-date=20250325)
Dec 11 09:15:19 compute-0 systemd[1]: libpod-conmon-06b329b58bfac80263d6cbc3e2ca093f1c050073b1ad7352b5295ea8e49a4b37.scope: Deactivated successfully.
Dec 11 09:15:19 compute-0 sudo[87347]: pam_unix(sudo:session): session closed for user root
Dec 11 09:15:19 compute-0 ceph-mon[74426]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec 11 09:15:19 compute-0 sudo[87685]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-suqzsvpwzcexsolrqcsqbckwdozwpdet ; /usr/bin/python3'
Dec 11 09:15:19 compute-0 sudo[87685]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 11 09:15:19 compute-0 ceph-mon[74426]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/1266408288' entity='mgr.compute-0.wwpcae' 
Dec 11 09:15:19 compute-0 ceph-mon[74426]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec 11 09:15:19 compute-0 python3[87687]: ansible-ansible.legacy.command Invoked with _raw_params=podman ps -a -f 'name=ceph-?(.*)-mgr.*' --format \{\{\.Command\}\} --no-trunc
                                           _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 11 09:15:19 compute-0 ceph-osd[82859]: log_channel(cluster) log [DBG] : 3.4 scrub starts
Dec 11 09:15:19 compute-0 sudo[87685]: pam_unix(sudo:session): session closed for user root
Dec 11 09:15:19 compute-0 ceph-osd[82859]: log_channel(cluster) log [DBG] : 3.4 scrub ok
Dec 11 09:15:19 compute-0 ceph-mon[74426]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/1266408288' entity='mgr.compute-0.wwpcae' 
Dec 11 09:15:19 compute-0 sudo[87724]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-brxrqonxbtvaqlntulvaginqtravjyzr ; /usr/bin/python3'
Dec 11 09:15:19 compute-0 sudo[87724]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 11 09:15:19 compute-0 ceph-mgr[74715]: [progress INFO root] Completed event 0e11e9f5-49d9-4277-bf54-f0f5f4eb6115 (Global Recovery Event) in 10 seconds
Dec 11 09:15:19 compute-0 python3[87726]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --interactive  --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   config set mgr mgr/dashboard/compute-0.wwpcae/server_addr 192.168.122.100
                                           _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 11 09:15:19 compute-0 podman[87727]: 2025-12-11 09:15:19.909732863 +0000 UTC m=+0.047122278 container create b0a5796d13b21a74fdba7c4624a13a7578b21f40a6d5e6342db85d6153e49ab4 (image=quay.io/ceph/ceph:v19, name=crazy_hellman, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Dec 11 09:15:19 compute-0 systemd[1]: Started libpod-conmon-b0a5796d13b21a74fdba7c4624a13a7578b21f40a6d5e6342db85d6153e49ab4.scope.
Dec 11 09:15:19 compute-0 podman[87727]: 2025-12-11 09:15:19.890966173 +0000 UTC m=+0.028355578 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Dec 11 09:15:19 compute-0 systemd[1]: Started libcrun container.
Dec 11 09:15:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f334922440ddf582dcdd2d37dfa472187d2b9678bb8b53565d57d61253e8610a/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec 11 09:15:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f334922440ddf582dcdd2d37dfa472187d2b9678bb8b53565d57d61253e8610a/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 11 09:15:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f334922440ddf582dcdd2d37dfa472187d2b9678bb8b53565d57d61253e8610a/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Dec 11 09:15:20 compute-0 podman[87727]: 2025-12-11 09:15:20.024749773 +0000 UTC m=+0.162139228 container init b0a5796d13b21a74fdba7c4624a13a7578b21f40a6d5e6342db85d6153e49ab4 (image=quay.io/ceph/ceph:v19, name=crazy_hellman, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, CEPH_REF=squid, io.buildah.version=1.40.1)
Dec 11 09:15:20 compute-0 podman[87727]: 2025-12-11 09:15:20.038915117 +0000 UTC m=+0.176304522 container start b0a5796d13b21a74fdba7c4624a13a7578b21f40a6d5e6342db85d6153e49ab4 (image=quay.io/ceph/ceph:v19, name=crazy_hellman, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec 11 09:15:20 compute-0 podman[87727]: 2025-12-11 09:15:20.042907544 +0000 UTC m=+0.180296989 container attach b0a5796d13b21a74fdba7c4624a13a7578b21f40a6d5e6342db85d6153e49ab4 (image=quay.io/ceph/ceph:v19, name=crazy_hellman, org.label-schema.schema-version=1.0, CEPH_REF=squid, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 11 09:15:20 compute-0 ceph-mgr[74715]: log_channel(cluster) log [DBG] : pgmap v103: 193 pgs: 193 active+clean; 449 KiB data, 54 MiB used, 40 GiB / 40 GiB avail
Dec 11 09:15:20 compute-0 ceph-osd[82859]: log_channel(cluster) log [DBG] : 6.1 scrub starts
Dec 11 09:15:20 compute-0 ceph-osd[82859]: log_channel(cluster) log [DBG] : 6.1 scrub ok
Dec 11 09:15:20 compute-0 ceph-mon[74426]: pgmap v101: 193 pgs: 193 active+clean; 449 KiB data, 54 MiB used, 40 GiB / 40 GiB avail
Dec 11 09:15:20 compute-0 ceph-mon[74426]: 4.3 scrub starts
Dec 11 09:15:20 compute-0 ceph-mon[74426]: 4.3 scrub ok
Dec 11 09:15:20 compute-0 ceph-mon[74426]: osdmap e36: 3 total, 2 up, 3 in
Dec 11 09:15:20 compute-0 ceph-mon[74426]: mgrmap e11: compute-0.wwpcae(active, since 2m), standbys: compute-2.uiimcn, compute-1.unesvp
Dec 11 09:15:20 compute-0 ceph-mon[74426]: from='mgr.14124 192.168.122.100:0/1266408288' entity='mgr.compute-0.wwpcae' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Dec 11 09:15:20 compute-0 ceph-mon[74426]: from='mgr.14124 192.168.122.100:0/1266408288' entity='mgr.compute-0.wwpcae' cmd=[{"prefix": "mgr metadata", "who": "compute-1.unesvp", "id": "compute-1.unesvp"}]: dispatch
Dec 11 09:15:20 compute-0 ceph-mon[74426]: 6.1c scrub starts
Dec 11 09:15:20 compute-0 ceph-mon[74426]: 6.1c scrub ok
Dec 11 09:15:20 compute-0 ceph-mon[74426]: from='mgr.14124 192.168.122.100:0/1266408288' entity='mgr.compute-0.wwpcae' 
Dec 11 09:15:20 compute-0 ceph-mon[74426]: from='mgr.14124 192.168.122.100:0/1266408288' entity='mgr.compute-0.wwpcae' 
Dec 11 09:15:21 compute-0 ceph-mon[74426]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config set, name=mgr/dashboard/compute-0.wwpcae/server_addr}] v 0)
Dec 11 09:15:21 compute-0 ceph-mgr[74715]: [volumes INFO mgr_util] scanning for idle connections..
Dec 11 09:15:21 compute-0 ceph-mgr[74715]: [volumes INFO mgr_util] cleaning up connections: []
Dec 11 09:15:21 compute-0 ceph-mon[74426]: log_channel(audit) log [INF] : from='client.? ' entity='client.admin' 
Dec 11 09:15:21 compute-0 ceph-mgr[74715]: [volumes INFO mgr_util] scanning for idle connections..
Dec 11 09:15:21 compute-0 ceph-mgr[74715]: [volumes INFO mgr_util] cleaning up connections: []
Dec 11 09:15:21 compute-0 ceph-mgr[74715]: [volumes INFO mgr_util] scanning for idle connections..
Dec 11 09:15:21 compute-0 ceph-mgr[74715]: [volumes INFO mgr_util] cleaning up connections: []
Dec 11 09:15:21 compute-0 systemd[1]: libpod-b0a5796d13b21a74fdba7c4624a13a7578b21f40a6d5e6342db85d6153e49ab4.scope: Deactivated successfully.
Dec 11 09:15:21 compute-0 podman[87727]: 2025-12-11 09:15:21.177980359 +0000 UTC m=+1.315369794 container died b0a5796d13b21a74fdba7c4624a13a7578b21f40a6d5e6342db85d6153e49ab4 (image=quay.io/ceph/ceph:v19, name=crazy_hellman, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.40.1, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Dec 11 09:15:21 compute-0 systemd[1]: var-lib-containers-storage-overlay-f334922440ddf582dcdd2d37dfa472187d2b9678bb8b53565d57d61253e8610a-merged.mount: Deactivated successfully.
Dec 11 09:15:21 compute-0 podman[87727]: 2025-12-11 09:15:21.22803057 +0000 UTC m=+1.365419975 container remove b0a5796d13b21a74fdba7c4624a13a7578b21f40a6d5e6342db85d6153e49ab4 (image=quay.io/ceph/ceph:v19, name=crazy_hellman, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec 11 09:15:21 compute-0 systemd[1]: libpod-conmon-b0a5796d13b21a74fdba7c4624a13a7578b21f40a6d5e6342db85d6153e49ab4.scope: Deactivated successfully.
Dec 11 09:15:21 compute-0 sudo[87724]: pam_unix(sudo:session): session closed for user root
Dec 11 09:15:21 compute-0 ceph-osd[82859]: log_channel(cluster) log [DBG] : 4.4 deep-scrub starts
Dec 11 09:15:21 compute-0 ceph-osd[82859]: log_channel(cluster) log [DBG] : 4.4 deep-scrub ok
Dec 11 09:15:21 compute-0 ceph-mon[74426]: 3.4 scrub starts
Dec 11 09:15:21 compute-0 ceph-mon[74426]: 3.4 scrub ok
Dec 11 09:15:21 compute-0 ceph-mon[74426]: 5.11 scrub starts
Dec 11 09:15:21 compute-0 ceph-mon[74426]: 5.11 scrub ok
Dec 11 09:15:21 compute-0 ceph-mon[74426]: pgmap v103: 193 pgs: 193 active+clean; 449 KiB data, 54 MiB used, 40 GiB / 40 GiB avail
Dec 11 09:15:21 compute-0 ceph-mon[74426]: 6.1 scrub starts
Dec 11 09:15:21 compute-0 ceph-mon[74426]: 6.1 scrub ok
Dec 11 09:15:21 compute-0 ceph-mon[74426]: 6.12 scrub starts
Dec 11 09:15:21 compute-0 ceph-mon[74426]: 6.12 scrub ok
Dec 11 09:15:21 compute-0 ceph-mon[74426]: from='client.? ' entity='client.admin' 
Dec 11 09:15:21 compute-0 sudo[87803]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qikmsgwfedyuimriqoejywyesmapasci ; /usr/bin/python3'
Dec 11 09:15:22 compute-0 sudo[87803]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 11 09:15:22 compute-0 python3[87805]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --interactive  --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   config set mgr mgr/dashboard/compute-1.unesvp/server_addr 192.168.122.101
                                           _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 11 09:15:22 compute-0 ceph-mgr[74715]: log_channel(cluster) log [DBG] : pgmap v104: 193 pgs: 193 active+clean; 449 KiB data, 54 MiB used, 40 GiB / 40 GiB avail
Dec 11 09:15:22 compute-0 ceph-osd[82859]: log_channel(cluster) log [DBG] : 5.5 scrub starts
Dec 11 09:15:22 compute-0 ceph-osd[82859]: log_channel(cluster) log [DBG] : 5.5 scrub ok
Dec 11 09:15:22 compute-0 podman[87806]: 2025-12-11 09:15:22.26667969 +0000 UTC m=+0.083916066 container create c4dc3e792f00638e40673c3112a8e5e9b659d047e32509cfd97af588a9b1bdd5 (image=quay.io/ceph/ceph:v19, name=goofy_tharp, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 11 09:15:22 compute-0 systemd[1]: Started libpod-conmon-c4dc3e792f00638e40673c3112a8e5e9b659d047e32509cfd97af588a9b1bdd5.scope.
Dec 11 09:15:22 compute-0 podman[87806]: 2025-12-11 09:15:22.229606104 +0000 UTC m=+0.046842470 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Dec 11 09:15:22 compute-0 systemd[1]: Started libcrun container.
Dec 11 09:15:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/72c7351fd5c61cbc75e82f21e05cccfc92aec5bf83dc33405c439121d6394244/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Dec 11 09:15:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/72c7351fd5c61cbc75e82f21e05cccfc92aec5bf83dc33405c439121d6394244/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec 11 09:15:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/72c7351fd5c61cbc75e82f21e05cccfc92aec5bf83dc33405c439121d6394244/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 11 09:15:22 compute-0 podman[87806]: 2025-12-11 09:15:22.354864611 +0000 UTC m=+0.172100967 container init c4dc3e792f00638e40673c3112a8e5e9b659d047e32509cfd97af588a9b1bdd5 (image=quay.io/ceph/ceph:v19, name=goofy_tharp, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Dec 11 09:15:22 compute-0 podman[87806]: 2025-12-11 09:15:22.364933874 +0000 UTC m=+0.182170210 container start c4dc3e792f00638e40673c3112a8e5e9b659d047e32509cfd97af588a9b1bdd5 (image=quay.io/ceph/ceph:v19, name=goofy_tharp, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, CEPH_REF=squid, OSD_FLAVOR=default)
Dec 11 09:15:22 compute-0 podman[87806]: 2025-12-11 09:15:22.369003863 +0000 UTC m=+0.186240199 container attach c4dc3e792f00638e40673c3112a8e5e9b659d047e32509cfd97af588a9b1bdd5 (image=quay.io/ceph/ceph:v19, name=goofy_tharp, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True)
Dec 11 09:15:22 compute-0 ceph-mon[74426]: 4.4 deep-scrub starts
Dec 11 09:15:22 compute-0 ceph-mon[74426]: 4.4 deep-scrub ok
Dec 11 09:15:22 compute-0 ceph-mon[74426]: 5.10 scrub starts
Dec 11 09:15:22 compute-0 ceph-mon[74426]: 5.10 scrub ok
Dec 11 09:15:22 compute-0 ceph-mon[74426]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config set, name=mgr/dashboard/compute-1.unesvp/server_addr}] v 0)
Dec 11 09:15:22 compute-0 ceph-mon[74426]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2112496974' entity='client.admin' 
Dec 11 09:15:22 compute-0 systemd[1]: libpod-c4dc3e792f00638e40673c3112a8e5e9b659d047e32509cfd97af588a9b1bdd5.scope: Deactivated successfully.
Dec 11 09:15:22 compute-0 podman[87806]: 2025-12-11 09:15:22.887179352 +0000 UTC m=+0.704415698 container died c4dc3e792f00638e40673c3112a8e5e9b659d047e32509cfd97af588a9b1bdd5 (image=quay.io/ceph/ceph:v19, name=goofy_tharp, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS)
Dec 11 09:15:22 compute-0 systemd[1]: var-lib-containers-storage-overlay-72c7351fd5c61cbc75e82f21e05cccfc92aec5bf83dc33405c439121d6394244-merged.mount: Deactivated successfully.
Dec 11 09:15:22 compute-0 podman[87806]: 2025-12-11 09:15:22.934747443 +0000 UTC m=+0.751983799 container remove c4dc3e792f00638e40673c3112a8e5e9b659d047e32509cfd97af588a9b1bdd5 (image=quay.io/ceph/ceph:v19, name=goofy_tharp, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, CEPH_REF=squid)
Dec 11 09:15:22 compute-0 systemd[1]: libpod-conmon-c4dc3e792f00638e40673c3112a8e5e9b659d047e32509cfd97af588a9b1bdd5.scope: Deactivated successfully.
Dec 11 09:15:22 compute-0 sudo[87803]: pam_unix(sudo:session): session closed for user root
Dec 11 09:15:23 compute-0 ceph-osd[82859]: log_channel(cluster) log [DBG] : 6.6 scrub starts
Dec 11 09:15:23 compute-0 ceph-osd[82859]: log_channel(cluster) log [DBG] : 6.6 scrub ok
Dec 11 09:15:23 compute-0 ceph-mon[74426]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "osd.2"} v 0)
Dec 11 09:15:23 compute-0 ceph-mon[74426]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/1266408288' entity='mgr.compute-0.wwpcae' cmd=[{"prefix": "auth get", "entity": "osd.2"}]: dispatch
Dec 11 09:15:23 compute-0 ceph-mon[74426]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 11 09:15:23 compute-0 ceph-mon[74426]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/1266408288' entity='mgr.compute-0.wwpcae' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 11 09:15:23 compute-0 ceph-mgr[74715]: [cephadm INFO cephadm.serve] Deploying daemon osd.2 on compute-2
Dec 11 09:15:23 compute-0 ceph-mgr[74715]: log_channel(cephadm) log [INF] : Deploying daemon osd.2 on compute-2
Dec 11 09:15:23 compute-0 ceph-mon[74426]: pgmap v104: 193 pgs: 193 active+clean; 449 KiB data, 54 MiB used, 40 GiB / 40 GiB avail
Dec 11 09:15:23 compute-0 ceph-mon[74426]: 5.5 scrub starts
Dec 11 09:15:23 compute-0 ceph-mon[74426]: 5.5 scrub ok
Dec 11 09:15:23 compute-0 ceph-mon[74426]: 3.14 scrub starts
Dec 11 09:15:23 compute-0 ceph-mon[74426]: 3.14 scrub ok
Dec 11 09:15:23 compute-0 ceph-mon[74426]: from='client.? 192.168.122.100:0/2112496974' entity='client.admin' 
Dec 11 09:15:23 compute-0 ceph-mon[74426]: from='mgr.14124 192.168.122.100:0/1266408288' entity='mgr.compute-0.wwpcae' cmd=[{"prefix": "auth get", "entity": "osd.2"}]: dispatch
Dec 11 09:15:23 compute-0 ceph-mon[74426]: from='mgr.14124 192.168.122.100:0/1266408288' entity='mgr.compute-0.wwpcae' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 11 09:15:23 compute-0 sudo[87880]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lbdygipfzgraqrguahjpwpodfexzeups ; /usr/bin/python3'
Dec 11 09:15:23 compute-0 sudo[87880]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 11 09:15:23 compute-0 ceph-mon[74426]: mon.compute-0@0(leader).osd e36 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 11 09:15:24 compute-0 python3[87882]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --interactive  --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   config set mgr mgr/dashboard/compute-2.uiimcn/server_addr 192.168.122.102 _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 11 09:15:24 compute-0 podman[87883]: 2025-12-11 09:15:24.105300724 +0000 UTC m=+0.080375703 container create c8f733c98ac7944a171bb042be5a4a9bb8763f5228846cd36f67997e7b98bbc6 (image=quay.io/ceph/ceph:v19, name=angry_raman, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=squid, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 11 09:15:24 compute-0 systemd[1]: Started libpod-conmon-c8f733c98ac7944a171bb042be5a4a9bb8763f5228846cd36f67997e7b98bbc6.scope.
Dec 11 09:15:24 compute-0 podman[87883]: 2025-12-11 09:15:24.071494712 +0000 UTC m=+0.046569761 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Dec 11 09:15:24 compute-0 systemd[1]: Started libcrun container.
Dec 11 09:15:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/45aadbec3d6c856453a605c763676648efe31daf99b0de1a6785381ef6449ac1/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec 11 09:15:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/45aadbec3d6c856453a605c763676648efe31daf99b0de1a6785381ef6449ac1/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 11 09:15:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/45aadbec3d6c856453a605c763676648efe31daf99b0de1a6785381ef6449ac1/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Dec 11 09:15:24 compute-0 podman[87883]: 2025-12-11 09:15:24.18271818 +0000 UTC m=+0.157793189 container init c8f733c98ac7944a171bb042be5a4a9bb8763f5228846cd36f67997e7b98bbc6 (image=quay.io/ceph/ceph:v19, name=angry_raman, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 11 09:15:24 compute-0 podman[87883]: 2025-12-11 09:15:24.192860175 +0000 UTC m=+0.167935134 container start c8f733c98ac7944a171bb042be5a4a9bb8763f5228846cd36f67997e7b98bbc6 (image=quay.io/ceph/ceph:v19, name=angry_raman, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec 11 09:15:24 compute-0 podman[87883]: 2025-12-11 09:15:24.196027237 +0000 UTC m=+0.171102246 container attach c8f733c98ac7944a171bb042be5a4a9bb8763f5228846cd36f67997e7b98bbc6 (image=quay.io/ceph/ceph:v19, name=angry_raman, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 11 09:15:24 compute-0 ceph-mgr[74715]: log_channel(cluster) log [DBG] : pgmap v105: 193 pgs: 193 active+clean; 449 KiB data, 54 MiB used, 40 GiB / 40 GiB avail
Dec 11 09:15:24 compute-0 ceph-osd[82859]: log_channel(cluster) log [DBG] : 3.2 scrub starts
Dec 11 09:15:24 compute-0 ceph-osd[82859]: log_channel(cluster) log [DBG] : 3.2 scrub ok
Dec 11 09:15:24 compute-0 ceph-mon[74426]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config set, name=mgr/dashboard/compute-2.uiimcn/server_addr}] v 0)
Dec 11 09:15:24 compute-0 ceph-mon[74426]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/4034639147' entity='client.admin' 
Dec 11 09:15:24 compute-0 systemd[1]: libpod-c8f733c98ac7944a171bb042be5a4a9bb8763f5228846cd36f67997e7b98bbc6.scope: Deactivated successfully.
Dec 11 09:15:24 compute-0 ceph-mon[74426]: 6.6 scrub starts
Dec 11 09:15:24 compute-0 ceph-mon[74426]: 6.6 scrub ok
Dec 11 09:15:24 compute-0 ceph-mon[74426]: Deploying daemon osd.2 on compute-2
Dec 11 09:15:24 compute-0 ceph-mon[74426]: 6.17 deep-scrub starts
Dec 11 09:15:24 compute-0 ceph-mon[74426]: 6.17 deep-scrub ok
Dec 11 09:15:24 compute-0 ceph-mon[74426]: pgmap v105: 193 pgs: 193 active+clean; 449 KiB data, 54 MiB used, 40 GiB / 40 GiB avail
Dec 11 09:15:24 compute-0 ceph-mon[74426]: from='client.? 192.168.122.100:0/4034639147' entity='client.admin' 
Dec 11 09:15:24 compute-0 podman[87923]: 2025-12-11 09:15:24.659276857 +0000 UTC m=+0.029763613 container died c8f733c98ac7944a171bb042be5a4a9bb8763f5228846cd36f67997e7b98bbc6 (image=quay.io/ceph/ceph:v19, name=angry_raman, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, CEPH_REF=squid, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 11 09:15:24 compute-0 systemd[1]: var-lib-containers-storage-overlay-45aadbec3d6c856453a605c763676648efe31daf99b0de1a6785381ef6449ac1-merged.mount: Deactivated successfully.
Dec 11 09:15:24 compute-0 podman[87923]: 2025-12-11 09:15:24.706925721 +0000 UTC m=+0.077412477 container remove c8f733c98ac7944a171bb042be5a4a9bb8763f5228846cd36f67997e7b98bbc6 (image=quay.io/ceph/ceph:v19, name=angry_raman, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 11 09:15:24 compute-0 systemd[1]: libpod-conmon-c8f733c98ac7944a171bb042be5a4a9bb8763f5228846cd36f67997e7b98bbc6.scope: Deactivated successfully.
Dec 11 09:15:24 compute-0 sudo[87880]: pam_unix(sudo:session): session closed for user root
Dec 11 09:15:24 compute-0 ceph-mgr[74715]: [progress INFO root] Writing back 13 completed events
Dec 11 09:15:24 compute-0 ceph-mon[74426]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0)
Dec 11 09:15:24 compute-0 ceph-mon[74426]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/1266408288' entity='mgr.compute-0.wwpcae' 
Dec 11 09:15:24 compute-0 sudo[87959]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tcewtmnttztnzmnebyltdzzvxuxnqivk ; /usr/bin/python3'
Dec 11 09:15:24 compute-0 sudo[87959]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 11 09:15:25 compute-0 python3[87961]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --interactive  --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   mgr module disable dashboard _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 11 09:15:25 compute-0 podman[87962]: 2025-12-11 09:15:25.171949219 +0000 UTC m=+0.044464453 container create 0549b918a4680d6868ae0ce51b9e02528ea2cfef190ff12549c58f5f66fc3d7c (image=quay.io/ceph/ceph:v19, name=relaxed_pike, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Dec 11 09:15:25 compute-0 systemd[1]: Started libpod-conmon-0549b918a4680d6868ae0ce51b9e02528ea2cfef190ff12549c58f5f66fc3d7c.scope.
Dec 11 09:15:25 compute-0 systemd[1]: Started libcrun container.
Dec 11 09:15:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5bd000c5b6e44282f75fa30e558f7bdc935841d61ca2ea4fbfcaa5f5943f16fe/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 11 09:15:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5bd000c5b6e44282f75fa30e558f7bdc935841d61ca2ea4fbfcaa5f5943f16fe/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Dec 11 09:15:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5bd000c5b6e44282f75fa30e558f7bdc935841d61ca2ea4fbfcaa5f5943f16fe/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec 11 09:15:25 compute-0 podman[87962]: 2025-12-11 09:15:25.248106286 +0000 UTC m=+0.120621540 container init 0549b918a4680d6868ae0ce51b9e02528ea2cfef190ff12549c58f5f66fc3d7c (image=quay.io/ceph/ceph:v19, name=relaxed_pike, CEPH_REF=squid, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 11 09:15:25 compute-0 podman[87962]: 2025-12-11 09:15:25.153446198 +0000 UTC m=+0.025961452 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Dec 11 09:15:25 compute-0 podman[87962]: 2025-12-11 09:15:25.253946933 +0000 UTC m=+0.126462167 container start 0549b918a4680d6868ae0ce51b9e02528ea2cfef190ff12549c58f5f66fc3d7c (image=quay.io/ceph/ceph:v19, name=relaxed_pike, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, CEPH_REF=squid, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 11 09:15:25 compute-0 podman[87962]: 2025-12-11 09:15:25.257119395 +0000 UTC m=+0.129634669 container attach 0549b918a4680d6868ae0ce51b9e02528ea2cfef190ff12549c58f5f66fc3d7c (image=quay.io/ceph/ceph:v19, name=relaxed_pike, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 11 09:15:25 compute-0 ceph-osd[82859]: log_channel(cluster) log [DBG] : 4.6 scrub starts
Dec 11 09:15:25 compute-0 ceph-osd[82859]: log_channel(cluster) log [DBG] : 4.6 scrub ok
Dec 11 09:15:25 compute-0 ceph-mon[74426]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr module disable", "module": "dashboard"} v 0)
Dec 11 09:15:25 compute-0 ceph-mon[74426]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3543000902' entity='client.admin' cmd=[{"prefix": "mgr module disable", "module": "dashboard"}]: dispatch
Dec 11 09:15:25 compute-0 ceph-mon[74426]: 3.2 scrub starts
Dec 11 09:15:25 compute-0 ceph-mon[74426]: 3.2 scrub ok
Dec 11 09:15:25 compute-0 ceph-mon[74426]: 4.15 deep-scrub starts
Dec 11 09:15:25 compute-0 ceph-mon[74426]: 4.15 deep-scrub ok
Dec 11 09:15:25 compute-0 ceph-mon[74426]: from='mgr.14124 192.168.122.100:0/1266408288' entity='mgr.compute-0.wwpcae' 
Dec 11 09:15:25 compute-0 ceph-mon[74426]: from='client.? 192.168.122.100:0/3543000902' entity='client.admin' cmd=[{"prefix": "mgr module disable", "module": "dashboard"}]: dispatch
Dec 11 09:15:25 compute-0 ceph-mon[74426]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3543000902' entity='client.admin' cmd='[{"prefix": "mgr module disable", "module": "dashboard"}]': finished
Dec 11 09:15:25 compute-0 relaxed_pike[87977]: module 'dashboard' is already disabled
Dec 11 09:15:25 compute-0 ceph-mon[74426]: log_channel(cluster) log [DBG] : mgrmap e12: compute-0.wwpcae(active, since 2m), standbys: compute-2.uiimcn, compute-1.unesvp
Dec 11 09:15:25 compute-0 systemd[1]: libpod-0549b918a4680d6868ae0ce51b9e02528ea2cfef190ff12549c58f5f66fc3d7c.scope: Deactivated successfully.
Dec 11 09:15:25 compute-0 podman[87962]: 2025-12-11 09:15:25.934992982 +0000 UTC m=+0.807508216 container died 0549b918a4680d6868ae0ce51b9e02528ea2cfef190ff12549c58f5f66fc3d7c (image=quay.io/ceph/ceph:v19, name=relaxed_pike, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Dec 11 09:15:25 compute-0 systemd[1]: var-lib-containers-storage-overlay-5bd000c5b6e44282f75fa30e558f7bdc935841d61ca2ea4fbfcaa5f5943f16fe-merged.mount: Deactivated successfully.
Dec 11 09:15:25 compute-0 podman[87962]: 2025-12-11 09:15:25.968787403 +0000 UTC m=+0.841302637 container remove 0549b918a4680d6868ae0ce51b9e02528ea2cfef190ff12549c58f5f66fc3d7c (image=quay.io/ceph/ceph:v19, name=relaxed_pike, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid)
Dec 11 09:15:25 compute-0 systemd[1]: libpod-conmon-0549b918a4680d6868ae0ce51b9e02528ea2cfef190ff12549c58f5f66fc3d7c.scope: Deactivated successfully.
Dec 11 09:15:25 compute-0 sudo[87959]: pam_unix(sudo:session): session closed for user root
Dec 11 09:15:26 compute-0 sudo[88038]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bweziagrlrcrnishdylaxdcoaakgnhxx ; /usr/bin/python3'
Dec 11 09:15:26 compute-0 sudo[88038]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 11 09:15:26 compute-0 ceph-mgr[74715]: log_channel(cluster) log [DBG] : pgmap v106: 193 pgs: 193 active+clean; 449 KiB data, 54 MiB used, 40 GiB / 40 GiB avail
Dec 11 09:15:26 compute-0 python3[88040]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --interactive  --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   mgr module enable dashboard _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 11 09:15:26 compute-0 ceph-osd[82859]: log_channel(cluster) log [DBG] : 3.1 scrub starts
Dec 11 09:15:26 compute-0 ceph-osd[82859]: log_channel(cluster) log [DBG] : 3.1 scrub ok
Dec 11 09:15:26 compute-0 podman[88041]: 2025-12-11 09:15:26.362183929 +0000 UTC m=+0.041894601 container create a778269a0cbbc767a40866ab9d977c3e65702e87b50589305fbdcb869a97b45a (image=quay.io/ceph/ceph:v19, name=xenodochial_noyce, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS)
Dec 11 09:15:26 compute-0 systemd[1]: Started libpod-conmon-a778269a0cbbc767a40866ab9d977c3e65702e87b50589305fbdcb869a97b45a.scope.
Dec 11 09:15:26 compute-0 systemd[1]: Started libcrun container.
Dec 11 09:15:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1cd08c63b0b8dde77facac18c777d40b37073ad48529e3374551ffd8f7a9e1ab/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 11 09:15:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1cd08c63b0b8dde77facac18c777d40b37073ad48529e3374551ffd8f7a9e1ab/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Dec 11 09:15:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1cd08c63b0b8dde77facac18c777d40b37073ad48529e3374551ffd8f7a9e1ab/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec 11 09:15:26 compute-0 podman[88041]: 2025-12-11 09:15:26.34691621 +0000 UTC m=+0.026626902 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Dec 11 09:15:26 compute-0 podman[88041]: 2025-12-11 09:15:26.447689414 +0000 UTC m=+0.127400116 container init a778269a0cbbc767a40866ab9d977c3e65702e87b50589305fbdcb869a97b45a (image=quay.io/ceph/ceph:v19, name=xenodochial_noyce, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Dec 11 09:15:26 compute-0 podman[88041]: 2025-12-11 09:15:26.454202343 +0000 UTC m=+0.133913005 container start a778269a0cbbc767a40866ab9d977c3e65702e87b50589305fbdcb869a97b45a (image=quay.io/ceph/ceph:v19, name=xenodochial_noyce, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Dec 11 09:15:26 compute-0 podman[88041]: 2025-12-11 09:15:26.458533812 +0000 UTC m=+0.138244504 container attach a778269a0cbbc767a40866ab9d977c3e65702e87b50589305fbdcb869a97b45a (image=quay.io/ceph/ceph:v19, name=xenodochial_noyce, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=squid, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 11 09:15:26 compute-0 ceph-mon[74426]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr module enable", "module": "dashboard"} v 0)
Dec 11 09:15:26 compute-0 ceph-mon[74426]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1987057205' entity='client.admin' cmd=[{"prefix": "mgr module enable", "module": "dashboard"}]: dispatch
Dec 11 09:15:26 compute-0 ceph-mon[74426]: 4.6 scrub starts
Dec 11 09:15:26 compute-0 ceph-mon[74426]: 4.6 scrub ok
Dec 11 09:15:26 compute-0 ceph-mon[74426]: 5.15 scrub starts
Dec 11 09:15:26 compute-0 ceph-mon[74426]: 5.15 scrub ok
Dec 11 09:15:26 compute-0 ceph-mon[74426]: from='client.? 192.168.122.100:0/3543000902' entity='client.admin' cmd='[{"prefix": "mgr module disable", "module": "dashboard"}]': finished
Dec 11 09:15:26 compute-0 ceph-mon[74426]: mgrmap e12: compute-0.wwpcae(active, since 2m), standbys: compute-2.uiimcn, compute-1.unesvp
Dec 11 09:15:26 compute-0 ceph-mon[74426]: pgmap v106: 193 pgs: 193 active+clean; 449 KiB data, 54 MiB used, 40 GiB / 40 GiB avail
Dec 11 09:15:26 compute-0 ceph-mon[74426]: from='client.? 192.168.122.100:0/1987057205' entity='client.admin' cmd=[{"prefix": "mgr module enable", "module": "dashboard"}]: dispatch
Dec 11 09:15:27 compute-0 ceph-mon[74426]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1987057205' entity='client.admin' cmd='[{"prefix": "mgr module enable", "module": "dashboard"}]': finished
Dec 11 09:15:27 compute-0 ceph-mgr[74715]: mgr handle_mgr_map respawning because set of enabled modules changed!
Dec 11 09:15:27 compute-0 ceph-mgr[74715]: mgr respawn  e: '/usr/bin/ceph-mgr'
Dec 11 09:15:27 compute-0 ceph-mgr[74715]: mgr respawn  0: '/usr/bin/ceph-mgr'
Dec 11 09:15:27 compute-0 ceph-mgr[74715]: mgr respawn  1: '-n'
Dec 11 09:15:27 compute-0 ceph-mgr[74715]: mgr respawn  2: 'mgr.compute-0.wwpcae'
Dec 11 09:15:27 compute-0 ceph-mgr[74715]: mgr respawn  3: '-f'
Dec 11 09:15:27 compute-0 ceph-mgr[74715]: mgr respawn  4: '--setuser'
Dec 11 09:15:27 compute-0 ceph-mgr[74715]: mgr respawn  5: 'ceph'
Dec 11 09:15:27 compute-0 ceph-mgr[74715]: mgr respawn  6: '--setgroup'
Dec 11 09:15:27 compute-0 ceph-mgr[74715]: mgr respawn  7: 'ceph'
Dec 11 09:15:27 compute-0 ceph-mgr[74715]: mgr respawn  8: '--default-log-to-file=false'
Dec 11 09:15:27 compute-0 ceph-mgr[74715]: mgr respawn  9: '--default-log-to-journald=true'
Dec 11 09:15:27 compute-0 ceph-mgr[74715]: mgr respawn  10: '--default-log-to-stderr=false'
Dec 11 09:15:27 compute-0 ceph-mgr[74715]: mgr respawn respawning with exe /usr/bin/ceph-mgr
Dec 11 09:15:27 compute-0 ceph-mgr[74715]: mgr respawn  exe_path /proc/self/exe
Dec 11 09:15:27 compute-0 ceph-mon[74426]: log_channel(cluster) log [DBG] : mgrmap e13: compute-0.wwpcae(active, since 2m), standbys: compute-2.uiimcn, compute-1.unesvp
Dec 11 09:15:27 compute-0 systemd[1]: libpod-a778269a0cbbc767a40866ab9d977c3e65702e87b50589305fbdcb869a97b45a.scope: Deactivated successfully.
Dec 11 09:15:27 compute-0 podman[88041]: 2025-12-11 09:15:27.275724197 +0000 UTC m=+0.955434869 container died a778269a0cbbc767a40866ab9d977c3e65702e87b50589305fbdcb869a97b45a (image=quay.io/ceph/ceph:v19, name=xenodochial_noyce, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, ceph=True, CEPH_REF=squid, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec 11 09:15:27 compute-0 systemd[1]: var-lib-containers-storage-overlay-1cd08c63b0b8dde77facac18c777d40b37073ad48529e3374551ffd8f7a9e1ab-merged.mount: Deactivated successfully.
Dec 11 09:15:27 compute-0 podman[88041]: 2025-12-11 09:15:27.314728774 +0000 UTC m=+0.994439446 container remove a778269a0cbbc767a40866ab9d977c3e65702e87b50589305fbdcb869a97b45a (image=quay.io/ceph/ceph:v19, name=xenodochial_noyce, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Dec 11 09:15:27 compute-0 ceph-osd[82859]: log_channel(cluster) log [DBG] : 6.4 scrub starts
Dec 11 09:15:27 compute-0 systemd[1]: libpod-conmon-a778269a0cbbc767a40866ab9d977c3e65702e87b50589305fbdcb869a97b45a.scope: Deactivated successfully.
Dec 11 09:15:27 compute-0 ceph-osd[82859]: log_channel(cluster) log [DBG] : 6.4 scrub ok
Dec 11 09:15:27 compute-0 sudo[88038]: pam_unix(sudo:session): session closed for user root
Dec 11 09:15:27 compute-0 sshd-session[76108]: Connection closed by 192.168.122.100 port 57132
Dec 11 09:15:27 compute-0 sshd-session[76052]: Connection closed by 192.168.122.100 port 34322
Dec 11 09:15:27 compute-0 sshd-session[75907]: Connection closed by 192.168.122.100 port 34278
Dec 11 09:15:27 compute-0 sshd-session[76079]: Connection closed by 192.168.122.100 port 57116
Dec 11 09:15:27 compute-0 sshd-session[76023]: Connection closed by 192.168.122.100 port 34320
Dec 11 09:15:27 compute-0 sshd-session[75994]: Connection closed by 192.168.122.100 port 34312
Dec 11 09:15:27 compute-0 sshd-session[75820]: Connection closed by 192.168.122.100 port 34256
Dec 11 09:15:27 compute-0 sshd-session[75878]: Connection closed by 192.168.122.100 port 34262
Dec 11 09:15:27 compute-0 sshd-session[75965]: Connection closed by 192.168.122.100 port 34298
Dec 11 09:15:27 compute-0 sshd-session[75819]: Connection closed by 192.168.122.100 port 34240
Dec 11 09:15:27 compute-0 sshd-session[75936]: Connection closed by 192.168.122.100 port 34292
Dec 11 09:15:27 compute-0 sshd-session[75849]: Connection closed by 192.168.122.100 port 34258
Dec 11 09:15:27 compute-0 sshd-session[76076]: pam_unix(sshd:session): session closed for user ceph-admin
Dec 11 09:15:27 compute-0 sshd-session[75796]: pam_unix(sshd:session): session closed for user ceph-admin
Dec 11 09:15:27 compute-0 systemd[1]: session-21.scope: Deactivated successfully.
Dec 11 09:15:27 compute-0 sshd-session[76020]: pam_unix(sshd:session): session closed for user ceph-admin
Dec 11 09:15:27 compute-0 sshd-session[75846]: pam_unix(sshd:session): session closed for user ceph-admin
Dec 11 09:15:27 compute-0 sshd-session[75875]: pam_unix(sshd:session): session closed for user ceph-admin
Dec 11 09:15:27 compute-0 sshd-session[76049]: pam_unix(sshd:session): session closed for user ceph-admin
Dec 11 09:15:27 compute-0 sshd-session[75991]: pam_unix(sshd:session): session closed for user ceph-admin
Dec 11 09:15:27 compute-0 sshd-session[76105]: pam_unix(sshd:session): session closed for user ceph-admin
Dec 11 09:15:27 compute-0 sshd-session[75904]: pam_unix(sshd:session): session closed for user ceph-admin
Dec 11 09:15:27 compute-0 systemd-logind[792]: Session 21 logged out. Waiting for processes to exit.
Dec 11 09:15:27 compute-0 systemd[1]: session-33.scope: Deactivated successfully.
Dec 11 09:15:27 compute-0 systemd[1]: session-33.scope: Consumed 22.775s CPU time.
Dec 11 09:15:27 compute-0 systemd[1]: session-32.scope: Deactivated successfully.
Dec 11 09:15:27 compute-0 systemd[1]: session-24.scope: Deactivated successfully.
Dec 11 09:15:27 compute-0 systemd[1]: session-25.scope: Deactivated successfully.
Dec 11 09:15:27 compute-0 systemd[1]: session-29.scope: Deactivated successfully.
Dec 11 09:15:27 compute-0 systemd[1]: session-30.scope: Deactivated successfully.
Dec 11 09:15:27 compute-0 systemd-logind[792]: Session 29 logged out. Waiting for processes to exit.
Dec 11 09:15:27 compute-0 systemd[1]: session-31.scope: Deactivated successfully.
Dec 11 09:15:27 compute-0 systemd[1]: session-26.scope: Deactivated successfully.
Dec 11 09:15:27 compute-0 systemd-logind[792]: Session 32 logged out. Waiting for processes to exit.
Dec 11 09:15:27 compute-0 systemd-logind[792]: Session 25 logged out. Waiting for processes to exit.
Dec 11 09:15:27 compute-0 systemd-logind[792]: Session 24 logged out. Waiting for processes to exit.
Dec 11 09:15:27 compute-0 systemd-logind[792]: Session 33 logged out. Waiting for processes to exit.
Dec 11 09:15:27 compute-0 systemd-logind[792]: Session 31 logged out. Waiting for processes to exit.
Dec 11 09:15:27 compute-0 systemd-logind[792]: Session 26 logged out. Waiting for processes to exit.
Dec 11 09:15:27 compute-0 systemd-logind[792]: Session 30 logged out. Waiting for processes to exit.
Dec 11 09:15:27 compute-0 systemd-logind[792]: Removed session 21.
Dec 11 09:15:27 compute-0 sshd-session[75809]: pam_unix(sshd:session): session closed for user ceph-admin
Dec 11 09:15:27 compute-0 systemd[1]: session-23.scope: Deactivated successfully.
Dec 11 09:15:27 compute-0 sshd-session[75933]: pam_unix(sshd:session): session closed for user ceph-admin
Dec 11 09:15:27 compute-0 sshd-session[75962]: pam_unix(sshd:session): session closed for user ceph-admin
Dec 11 09:15:27 compute-0 systemd-logind[792]: Session 23 logged out. Waiting for processes to exit.
Dec 11 09:15:27 compute-0 systemd-logind[792]: Removed session 33.
Dec 11 09:15:27 compute-0 systemd[1]: session-27.scope: Deactivated successfully.
Dec 11 09:15:27 compute-0 systemd-logind[792]: Session 27 logged out. Waiting for processes to exit.
Dec 11 09:15:27 compute-0 systemd[1]: session-28.scope: Deactivated successfully.
Dec 11 09:15:27 compute-0 systemd-logind[792]: Session 28 logged out. Waiting for processes to exit.
Dec 11 09:15:27 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-mgr-compute-0-wwpcae[74711]: ignoring --setuser ceph since I am not root
Dec 11 09:15:27 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-mgr-compute-0-wwpcae[74711]: ignoring --setgroup ceph since I am not root
Dec 11 09:15:27 compute-0 systemd-logind[792]: Removed session 32.
Dec 11 09:15:27 compute-0 systemd-logind[792]: Removed session 24.
Dec 11 09:15:27 compute-0 systemd-logind[792]: Removed session 25.
Dec 11 09:15:27 compute-0 systemd-logind[792]: Removed session 29.
Dec 11 09:15:27 compute-0 systemd-logind[792]: Removed session 30.
Dec 11 09:15:27 compute-0 ceph-mgr[74715]: ceph version 19.2.3 (c92aebb279828e9c3c1f5d24613efca272649e62) squid (stable), process ceph-mgr, pid 2
Dec 11 09:15:27 compute-0 systemd-logind[792]: Removed session 31.
Dec 11 09:15:27 compute-0 ceph-mgr[74715]: pidfile_write: ignore empty --pid-file
Dec 11 09:15:27 compute-0 systemd-logind[792]: Removed session 26.
Dec 11 09:15:27 compute-0 systemd-logind[792]: Removed session 23.
Dec 11 09:15:27 compute-0 systemd-logind[792]: Removed session 27.
Dec 11 09:15:27 compute-0 systemd-logind[792]: Removed session 28.
Dec 11 09:15:27 compute-0 ceph-mgr[74715]: mgr[py] Loading python module 'alerts'
Dec 11 09:15:27 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-mgr-compute-0-wwpcae[74711]: 2025-12-11T09:15:27.542+0000 7fdde5d1d140 -1 mgr[py] Module alerts has missing NOTIFY_TYPES member
Dec 11 09:15:27 compute-0 ceph-mgr[74715]: mgr[py] Module alerts has missing NOTIFY_TYPES member
Dec 11 09:15:27 compute-0 ceph-mgr[74715]: mgr[py] Loading python module 'balancer'
Dec 11 09:15:27 compute-0 sudo[88135]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-guduezdktrfpxtpccpwcwjurtubhhavl ; /usr/bin/python3'
Dec 11 09:15:27 compute-0 sudo[88135]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 11 09:15:27 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-mgr-compute-0-wwpcae[74711]: 2025-12-11T09:15:27.635+0000 7fdde5d1d140 -1 mgr[py] Module balancer has missing NOTIFY_TYPES member
Dec 11 09:15:27 compute-0 ceph-mgr[74715]: mgr[py] Module balancer has missing NOTIFY_TYPES member
Dec 11 09:15:27 compute-0 ceph-mgr[74715]: mgr[py] Loading python module 'cephadm'
Dec 11 09:15:27 compute-0 python3[88137]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --interactive  --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   dashboard set-grafana-api-username admin _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 11 09:15:27 compute-0 podman[88138]: 2025-12-11 09:15:27.883474831 +0000 UTC m=+0.050773526 container create f77f2f0622c0bf336494b240b3beb4ce7cd0038e46148ae49a77f561da368626 (image=quay.io/ceph/ceph:v19, name=adoring_williams, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.build-date=20250325)
Dec 11 09:15:27 compute-0 systemd[1]: Started libpod-conmon-f77f2f0622c0bf336494b240b3beb4ce7cd0038e46148ae49a77f561da368626.scope.
Dec 11 09:15:27 compute-0 systemd[1]: Started libcrun container.
Dec 11 09:15:27 compute-0 podman[88138]: 2025-12-11 09:15:27.861370653 +0000 UTC m=+0.028669368 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Dec 11 09:15:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bbc4b1161a971ba641bceaee871148b7d20afda9983e938b38bf6ff1dd188d9a/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec 11 09:15:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bbc4b1161a971ba641bceaee871148b7d20afda9983e938b38bf6ff1dd188d9a/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 11 09:15:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bbc4b1161a971ba641bceaee871148b7d20afda9983e938b38bf6ff1dd188d9a/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Dec 11 09:15:27 compute-0 ceph-mon[74426]: 3.1 scrub starts
Dec 11 09:15:27 compute-0 ceph-mon[74426]: 3.1 scrub ok
Dec 11 09:15:27 compute-0 ceph-mon[74426]: 3.13 scrub starts
Dec 11 09:15:27 compute-0 ceph-mon[74426]: 3.13 scrub ok
Dec 11 09:15:27 compute-0 ceph-mon[74426]: from='client.? 192.168.122.100:0/1987057205' entity='client.admin' cmd='[{"prefix": "mgr module enable", "module": "dashboard"}]': finished
Dec 11 09:15:27 compute-0 ceph-mon[74426]: mgrmap e13: compute-0.wwpcae(active, since 2m), standbys: compute-2.uiimcn, compute-1.unesvp
Dec 11 09:15:27 compute-0 ceph-mon[74426]: 6.4 scrub starts
Dec 11 09:15:27 compute-0 ceph-mon[74426]: 6.4 scrub ok
Dec 11 09:15:28 compute-0 podman[88138]: 2025-12-11 09:15:28.004795041 +0000 UTC m=+0.172093796 container init f77f2f0622c0bf336494b240b3beb4ce7cd0038e46148ae49a77f561da368626 (image=quay.io/ceph/ceph:v19, name=adoring_williams, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 11 09:15:28 compute-0 podman[88138]: 2025-12-11 09:15:28.014579525 +0000 UTC m=+0.181878240 container start f77f2f0622c0bf336494b240b3beb4ce7cd0038e46148ae49a77f561da368626 (image=quay.io/ceph/ceph:v19, name=adoring_williams, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.40.1, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 11 09:15:28 compute-0 podman[88138]: 2025-12-11 09:15:28.019047188 +0000 UTC m=+0.186346023 container attach f77f2f0622c0bf336494b240b3beb4ce7cd0038e46148ae49a77f561da368626 (image=quay.io/ceph/ceph:v19, name=adoring_williams, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Dec 11 09:15:28 compute-0 ceph-osd[82859]: log_channel(cluster) log [DBG] : 4.2 scrub starts
Dec 11 09:15:28 compute-0 ceph-osd[82859]: log_channel(cluster) log [DBG] : 4.2 scrub ok
Dec 11 09:15:28 compute-0 ceph-mgr[74715]: mgr[py] Loading python module 'crash'
Dec 11 09:15:28 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-mgr-compute-0-wwpcae[74711]: 2025-12-11T09:15:28.592+0000 7fdde5d1d140 -1 mgr[py] Module crash has missing NOTIFY_TYPES member
Dec 11 09:15:28 compute-0 ceph-mgr[74715]: mgr[py] Module crash has missing NOTIFY_TYPES member
Dec 11 09:15:28 compute-0 ceph-mgr[74715]: mgr[py] Loading python module 'dashboard'
Dec 11 09:15:28 compute-0 ceph-mon[74426]: 4.13 scrub starts
Dec 11 09:15:28 compute-0 ceph-mon[74426]: 4.13 scrub ok
Dec 11 09:15:28 compute-0 ceph-mon[74426]: 4.2 scrub starts
Dec 11 09:15:28 compute-0 ceph-mon[74426]: 4.2 scrub ok
Dec 11 09:15:28 compute-0 ceph-mon[74426]: mon.compute-0@0(leader).osd e36 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 11 09:15:29 compute-0 ceph-mgr[74715]: mgr[py] Loading python module 'devicehealth'
Dec 11 09:15:29 compute-0 ceph-osd[82859]: log_channel(cluster) log [DBG] : 6.0 scrub starts
Dec 11 09:15:29 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-mgr-compute-0-wwpcae[74711]: 2025-12-11T09:15:29.327+0000 7fdde5d1d140 -1 mgr[py] Module devicehealth has missing NOTIFY_TYPES member
Dec 11 09:15:29 compute-0 ceph-mgr[74715]: mgr[py] Module devicehealth has missing NOTIFY_TYPES member
Dec 11 09:15:29 compute-0 ceph-mgr[74715]: mgr[py] Loading python module 'diskprediction_local'
Dec 11 09:15:29 compute-0 ceph-osd[82859]: log_channel(cluster) log [DBG] : 6.0 scrub ok
Dec 11 09:15:29 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-mgr-compute-0-wwpcae[74711]: /lib64/python3.9/site-packages/scipy/__init__.py:73: UserWarning: NumPy was imported from a Python sub-interpreter but NumPy does not properly support sub-interpreters. This will likely work for most users but might cause hard to track down issues or subtle bugs. A common user of the rare sub-interpreter feature is wsgi which also allows single-interpreter mode.
Dec 11 09:15:29 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-mgr-compute-0-wwpcae[74711]: Improvements in the case of bugs are welcome, but is not on the NumPy roadmap, and full support may require significant effort to achieve.
Dec 11 09:15:29 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-mgr-compute-0-wwpcae[74711]:   from numpy import show_config as show_numpy_config
Dec 11 09:15:29 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-mgr-compute-0-wwpcae[74711]: 2025-12-11T09:15:29.569+0000 7fdde5d1d140 -1 mgr[py] Module diskprediction_local has missing NOTIFY_TYPES member
Dec 11 09:15:29 compute-0 ceph-mgr[74715]: mgr[py] Module diskprediction_local has missing NOTIFY_TYPES member
Dec 11 09:15:29 compute-0 ceph-mgr[74715]: mgr[py] Loading python module 'influx'
Dec 11 09:15:29 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-mgr-compute-0-wwpcae[74711]: 2025-12-11T09:15:29.659+0000 7fdde5d1d140 -1 mgr[py] Module influx has missing NOTIFY_TYPES member
Dec 11 09:15:29 compute-0 ceph-mgr[74715]: mgr[py] Module influx has missing NOTIFY_TYPES member
Dec 11 09:15:29 compute-0 ceph-mgr[74715]: mgr[py] Loading python module 'insights'
Dec 11 09:15:29 compute-0 ceph-mgr[74715]: mgr[py] Loading python module 'iostat'
Dec 11 09:15:29 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-mgr-compute-0-wwpcae[74711]: 2025-12-11T09:15:29.854+0000 7fdde5d1d140 -1 mgr[py] Module iostat has missing NOTIFY_TYPES member
Dec 11 09:15:29 compute-0 ceph-mgr[74715]: mgr[py] Module iostat has missing NOTIFY_TYPES member
Dec 11 09:15:29 compute-0 ceph-mgr[74715]: mgr[py] Loading python module 'k8sevents'
Dec 11 09:15:29 compute-0 ceph-mon[74426]: 3.10 deep-scrub starts
Dec 11 09:15:29 compute-0 ceph-mon[74426]: 3.10 deep-scrub ok
Dec 11 09:15:29 compute-0 ceph-mon[74426]: 6.0 scrub starts
Dec 11 09:15:29 compute-0 ceph-mon[74426]: 6.0 scrub ok
Dec 11 09:15:30 compute-0 ceph-osd[82859]: log_channel(cluster) log [DBG] : 5.3 scrub starts
Dec 11 09:15:30 compute-0 ceph-osd[82859]: log_channel(cluster) log [DBG] : 5.3 scrub ok
Dec 11 09:15:30 compute-0 ceph-mgr[74715]: mgr[py] Loading python module 'localpool'
Dec 11 09:15:30 compute-0 ceph-mgr[74715]: mgr[py] Loading python module 'mds_autoscaler'
Dec 11 09:15:30 compute-0 ceph-mgr[74715]: mgr[py] Loading python module 'mirroring'
Dec 11 09:15:30 compute-0 ceph-mgr[74715]: mgr[py] Loading python module 'nfs'
Dec 11 09:15:31 compute-0 ceph-mon[74426]: 6.15 scrub starts
Dec 11 09:15:31 compute-0 ceph-mon[74426]: 6.15 scrub ok
Dec 11 09:15:31 compute-0 ceph-mon[74426]: 5.3 scrub starts
Dec 11 09:15:31 compute-0 ceph-mon[74426]: 5.3 scrub ok
Dec 11 09:15:31 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-mgr-compute-0-wwpcae[74711]: 2025-12-11T09:15:31.059+0000 7fdde5d1d140 -1 mgr[py] Module nfs has missing NOTIFY_TYPES member
Dec 11 09:15:31 compute-0 ceph-mgr[74715]: mgr[py] Module nfs has missing NOTIFY_TYPES member
Dec 11 09:15:31 compute-0 ceph-mgr[74715]: mgr[py] Loading python module 'orchestrator'
Dec 11 09:15:31 compute-0 ceph-mon[74426]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]} v 0)
Dec 11 09:15:31 compute-0 ceph-mon[74426]: log_channel(audit) log [INF] : from='osd.2 ' entity='osd.2' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]}]: dispatch
Dec 11 09:15:31 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-mgr-compute-0-wwpcae[74711]: 2025-12-11T09:15:31.315+0000 7fdde5d1d140 -1 mgr[py] Module orchestrator has missing NOTIFY_TYPES member
Dec 11 09:15:31 compute-0 ceph-mgr[74715]: mgr[py] Module orchestrator has missing NOTIFY_TYPES member
Dec 11 09:15:31 compute-0 ceph-mgr[74715]: mgr[py] Loading python module 'osd_perf_query'
Dec 11 09:15:31 compute-0 ceph-osd[82859]: log_channel(cluster) log [DBG] : 3.6 scrub starts
Dec 11 09:15:31 compute-0 ceph-osd[82859]: log_channel(cluster) log [DBG] : 3.6 scrub ok
Dec 11 09:15:31 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-mgr-compute-0-wwpcae[74711]: 2025-12-11T09:15:31.399+0000 7fdde5d1d140 -1 mgr[py] Module osd_perf_query has missing NOTIFY_TYPES member
Dec 11 09:15:31 compute-0 ceph-mgr[74715]: mgr[py] Module osd_perf_query has missing NOTIFY_TYPES member
Dec 11 09:15:31 compute-0 ceph-mgr[74715]: mgr[py] Loading python module 'osd_support'
Dec 11 09:15:31 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-mgr-compute-0-wwpcae[74711]: 2025-12-11T09:15:31.482+0000 7fdde5d1d140 -1 mgr[py] Module osd_support has missing NOTIFY_TYPES member
Dec 11 09:15:31 compute-0 ceph-mgr[74715]: mgr[py] Module osd_support has missing NOTIFY_TYPES member
Dec 11 09:15:31 compute-0 ceph-mgr[74715]: mgr[py] Loading python module 'pg_autoscaler'
Dec 11 09:15:31 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-mgr-compute-0-wwpcae[74711]: 2025-12-11T09:15:31.576+0000 7fdde5d1d140 -1 mgr[py] Module pg_autoscaler has missing NOTIFY_TYPES member
Dec 11 09:15:31 compute-0 ceph-mgr[74715]: mgr[py] Module pg_autoscaler has missing NOTIFY_TYPES member
Dec 11 09:15:31 compute-0 ceph-mgr[74715]: mgr[py] Loading python module 'progress'
Dec 11 09:15:31 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-mgr-compute-0-wwpcae[74711]: 2025-12-11T09:15:31.663+0000 7fdde5d1d140 -1 mgr[py] Module progress has missing NOTIFY_TYPES member
Dec 11 09:15:31 compute-0 ceph-mgr[74715]: mgr[py] Module progress has missing NOTIFY_TYPES member
Dec 11 09:15:31 compute-0 ceph-mgr[74715]: mgr[py] Loading python module 'prometheus'
Dec 11 09:15:32 compute-0 ceph-mon[74426]: mon.compute-0@0(leader).osd e36 do_prune osdmap full prune enabled
Dec 11 09:15:32 compute-0 ceph-mon[74426]: 5.16 scrub starts
Dec 11 09:15:32 compute-0 ceph-mon[74426]: 5.16 scrub ok
Dec 11 09:15:32 compute-0 ceph-mon[74426]: from='osd.2 [v2:192.168.122.102:6800/1051349850,v1:192.168.122.102:6801/1051349850]' entity='osd.2' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]}]: dispatch
Dec 11 09:15:32 compute-0 ceph-mon[74426]: from='osd.2 ' entity='osd.2' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]}]: dispatch
Dec 11 09:15:32 compute-0 ceph-mon[74426]: 3.6 scrub starts
Dec 11 09:15:32 compute-0 ceph-mon[74426]: 3.6 scrub ok
Dec 11 09:15:32 compute-0 ceph-mon[74426]: log_channel(audit) log [INF] : from='osd.2 ' entity='osd.2' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]}]': finished
Dec 11 09:15:32 compute-0 ceph-mon[74426]: mon.compute-0@0(leader).osd e37 e37: 3 total, 2 up, 3 in
Dec 11 09:15:32 compute-0 ceph-mon[74426]: log_channel(cluster) log [DBG] : osdmap e37: 3 total, 2 up, 3 in
Dec 11 09:15:32 compute-0 ceph-mon[74426]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd crush create-or-move", "id": 2, "weight":0.0195, "args": ["host=compute-2", "root=default"]} v 0)
Dec 11 09:15:32 compute-0 ceph-mon[74426]: log_channel(audit) log [INF] : from='osd.2 ' entity='osd.2' cmd=[{"prefix": "osd crush create-or-move", "id": 2, "weight":0.0195, "args": ["host=compute-2", "root=default"]}]: dispatch
Dec 11 09:15:32 compute-0 ceph-mon[74426]: mon.compute-0@0(leader).osd e37 create-or-move crush item name 'osd.2' initial_weight 0.0195 at location {host=compute-2,root=default}
Dec 11 09:15:32 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-mgr-compute-0-wwpcae[74711]: 2025-12-11T09:15:32.075+0000 7fdde5d1d140 -1 mgr[py] Module prometheus has missing NOTIFY_TYPES member
Dec 11 09:15:32 compute-0 ceph-mgr[74715]: mgr[py] Module prometheus has missing NOTIFY_TYPES member
Dec 11 09:15:32 compute-0 ceph-mgr[74715]: mgr[py] Loading python module 'rbd_support'
Dec 11 09:15:32 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-mgr-compute-0-wwpcae[74711]: 2025-12-11T09:15:32.184+0000 7fdde5d1d140 -1 mgr[py] Module rbd_support has missing NOTIFY_TYPES member
Dec 11 09:15:32 compute-0 ceph-mgr[74715]: mgr[py] Module rbd_support has missing NOTIFY_TYPES member
Dec 11 09:15:32 compute-0 ceph-mgr[74715]: mgr[py] Loading python module 'restful'
Dec 11 09:15:32 compute-0 ceph-osd[82859]: log_channel(cluster) log [DBG] : 5.0 deep-scrub starts
Dec 11 09:15:32 compute-0 ceph-osd[82859]: log_channel(cluster) log [DBG] : 5.0 deep-scrub ok
Dec 11 09:15:32 compute-0 ceph-mgr[74715]: mgr[py] Loading python module 'rgw'
Dec 11 09:15:32 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-mgr-compute-0-wwpcae[74711]: 2025-12-11T09:15:32.682+0000 7fdde5d1d140 -1 mgr[py] Module rgw has missing NOTIFY_TYPES member
Dec 11 09:15:32 compute-0 ceph-mgr[74715]: mgr[py] Module rgw has missing NOTIFY_TYPES member
Dec 11 09:15:32 compute-0 ceph-mgr[74715]: mgr[py] Loading python module 'rook'
Dec 11 09:15:33 compute-0 ceph-mon[74426]: mon.compute-0@0(leader).osd e37 do_prune osdmap full prune enabled
Dec 11 09:15:33 compute-0 ceph-mon[74426]: log_channel(audit) log [INF] : from='osd.2 ' entity='osd.2' cmd='[{"prefix": "osd crush create-or-move", "id": 2, "weight":0.0195, "args": ["host=compute-2", "root=default"]}]': finished
Dec 11 09:15:33 compute-0 ceph-mon[74426]: mon.compute-0@0(leader).osd e38 e38: 3 total, 2 up, 3 in
Dec 11 09:15:33 compute-0 ceph-mon[74426]: log_channel(cluster) log [DBG] : osdmap e38: 3 total, 2 up, 3 in
Dec 11 09:15:33 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 38 pg[4.19( empty local-lis/les=28/29 n=0 ec=28/17 lis/c=28/28 les/c/f=29/29/0 sis=38 pruub=11.103866577s) [] r=-1 lpr=38 pi=[28,38)/1 crt=0'0 mlcod 0'0 active pruub 85.281417847s@ mbc={}] PeeringState::start_peering_interval up [1] -> [], acting [1] -> [], acting_primary 1 -> -1, up_primary 1 -> -1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 11 09:15:33 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 38 pg[4.1c( empty local-lis/les=28/29 n=0 ec=28/17 lis/c=28/28 les/c/f=29/29/0 sis=38 pruub=11.108639717s) [] r=-1 lpr=38 pi=[28,38)/1 crt=0'0 mlcod 0'0 active pruub 85.286239624s@ mbc={}] PeeringState::start_peering_interval up [1] -> [], acting [1] -> [], acting_primary 1 -> -1, up_primary 1 -> -1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 11 09:15:33 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 38 pg[4.19( empty local-lis/les=28/29 n=0 ec=28/17 lis/c=28/28 les/c/f=29/29/0 sis=38 pruub=11.103866577s) [] r=-1 lpr=38 pi=[28,38)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 85.281417847s@ mbc={}] state<Start>: transitioning to Stray
Dec 11 09:15:33 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 38 pg[4.1c( empty local-lis/les=28/29 n=0 ec=28/17 lis/c=28/28 les/c/f=29/29/0 sis=38 pruub=11.108639717s) [] r=-1 lpr=38 pi=[28,38)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 85.286239624s@ mbc={}] state<Start>: transitioning to Stray
Dec 11 09:15:33 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 38 pg[6.1b( empty local-lis/les=30/31 n=0 ec=30/20 lis/c=30/30 les/c/f=31/31/0 sis=38 pruub=13.169590950s) [] r=-1 lpr=38 pi=[30,38)/1 crt=0'0 mlcod 0'0 active pruub 87.347328186s@ mbc={}] PeeringState::start_peering_interval up [1] -> [], acting [1] -> [], acting_primary 1 -> -1, up_primary 1 -> -1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 11 09:15:33 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 38 pg[6.1b( empty local-lis/les=30/31 n=0 ec=30/20 lis/c=30/30 les/c/f=31/31/0 sis=38 pruub=13.169590950s) [] r=-1 lpr=38 pi=[30,38)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 87.347328186s@ mbc={}] state<Start>: transitioning to Stray
Dec 11 09:15:33 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 38 pg[2.1b( empty local-lis/les=32/33 n=0 ec=26/13 lis/c=32/32 les/c/f=33/33/0 sis=38 pruub=8.805236816s) [] r=-1 lpr=38 pi=[32,38)/1 crt=0'0 mlcod 0'0 active pruub 82.983154297s@ mbc={}] PeeringState::start_peering_interval up [1] -> [], acting [1] -> [], acting_primary 1 -> -1, up_primary 1 -> -1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 11 09:15:33 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 38 pg[4.1d( empty local-lis/les=28/29 n=0 ec=28/17 lis/c=28/28 les/c/f=29/29/0 sis=38 pruub=11.103394508s) [] r=-1 lpr=38 pi=[28,38)/1 crt=0'0 mlcod 0'0 active pruub 85.281341553s@ mbc={}] PeeringState::start_peering_interval up [1] -> [], acting [1] -> [], acting_primary 1 -> -1, up_primary 1 -> -1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 11 09:15:33 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 38 pg[2.1b( empty local-lis/les=32/33 n=0 ec=26/13 lis/c=32/32 les/c/f=33/33/0 sis=38 pruub=8.805236816s) [] r=-1 lpr=38 pi=[32,38)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 82.983154297s@ mbc={}] state<Start>: transitioning to Stray
Dec 11 09:15:33 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 38 pg[4.1d( empty local-lis/les=28/29 n=0 ec=28/17 lis/c=28/28 les/c/f=29/29/0 sis=38 pruub=11.103394508s) [] r=-1 lpr=38 pi=[28,38)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 85.281341553s@ mbc={}] state<Start>: transitioning to Stray
Dec 11 09:15:33 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 38 pg[4.3( empty local-lis/les=28/29 n=0 ec=28/17 lis/c=28/28 les/c/f=29/29/0 sis=38 pruub=11.102982521s) [] r=-1 lpr=38 pi=[28,38)/1 crt=0'0 mlcod 0'0 active pruub 85.281112671s@ mbc={}] PeeringState::start_peering_interval up [1] -> [], acting [1] -> [], acting_primary 1 -> -1, up_primary 1 -> -1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 11 09:15:33 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 38 pg[3.8( empty local-lis/les=28/29 n=0 ec=28/15 lis/c=28/28 les/c/f=29/29/0 sis=38 pruub=11.103121758s) [] r=-1 lpr=38 pi=[28,38)/1 crt=0'0 mlcod 0'0 active pruub 85.281219482s@ mbc={}] PeeringState::start_peering_interval up [1] -> [], acting [1] -> [], acting_primary 1 -> -1, up_primary 1 -> -1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 11 09:15:33 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 38 pg[4.3( empty local-lis/les=28/29 n=0 ec=28/17 lis/c=28/28 les/c/f=29/29/0 sis=38 pruub=11.102982521s) [] r=-1 lpr=38 pi=[28,38)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 85.281112671s@ mbc={}] state<Start>: transitioning to Stray
Dec 11 09:15:33 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 38 pg[3.8( empty local-lis/les=28/29 n=0 ec=28/15 lis/c=28/28 les/c/f=29/29/0 sis=38 pruub=11.103121758s) [] r=-1 lpr=38 pi=[28,38)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 85.281219482s@ mbc={}] state<Start>: transitioning to Stray
Dec 11 09:15:33 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 38 pg[4.2( empty local-lis/les=28/29 n=0 ec=28/17 lis/c=28/28 les/c/f=29/29/0 sis=38 pruub=11.102354050s) [] r=-1 lpr=38 pi=[28,38)/1 crt=0'0 mlcod 0'0 active pruub 85.280838013s@ mbc={}] PeeringState::start_peering_interval up [1] -> [], acting [1] -> [], acting_primary 1 -> -1, up_primary 1 -> -1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 11 09:15:33 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 38 pg[6.1( empty local-lis/les=30/31 n=0 ec=30/20 lis/c=30/30 les/c/f=31/31/0 sis=38 pruub=13.169154167s) [] r=-1 lpr=38 pi=[30,38)/1 crt=0'0 mlcod 0'0 active pruub 87.347648621s@ mbc={}] PeeringState::start_peering_interval up [1] -> [], acting [1] -> [], acting_primary 1 -> -1, up_primary 1 -> -1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 11 09:15:33 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 38 pg[4.2( empty local-lis/les=28/29 n=0 ec=28/17 lis/c=28/28 les/c/f=29/29/0 sis=38 pruub=11.102354050s) [] r=-1 lpr=38 pi=[28,38)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 85.280838013s@ mbc={}] state<Start>: transitioning to Stray
Dec 11 09:15:33 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 38 pg[5.0( empty local-lis/les=30/31 n=0 ec=19/19 lis/c=30/30 les/c/f=31/31/0 sis=38 pruub=13.169591904s) [] r=-1 lpr=38 pi=[30,38)/1 crt=0'0 mlcod 0'0 active pruub 87.348152161s@ mbc={}] PeeringState::start_peering_interval up [1] -> [], acting [1] -> [], acting_primary 1 -> -1, up_primary 1 -> -1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 11 09:15:33 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 38 pg[5.0( empty local-lis/les=30/31 n=0 ec=19/19 lis/c=30/30 les/c/f=31/31/0 sis=38 pruub=13.169591904s) [] r=-1 lpr=38 pi=[30,38)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 87.348152161s@ mbc={}] state<Start>: transitioning to Stray
Dec 11 09:15:33 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 38 pg[6.1( empty local-lis/les=30/31 n=0 ec=30/20 lis/c=30/30 les/c/f=31/31/0 sis=38 pruub=13.169154167s) [] r=-1 lpr=38 pi=[30,38)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 87.347648621s@ mbc={}] state<Start>: transitioning to Stray
Dec 11 09:15:33 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 38 pg[4.6( empty local-lis/les=28/29 n=0 ec=28/17 lis/c=28/28 les/c/f=29/29/0 sis=38 pruub=11.102509499s) [] r=-1 lpr=38 pi=[28,38)/1 crt=0'0 mlcod 0'0 active pruub 85.281120300s@ mbc={}] PeeringState::start_peering_interval up [1] -> [], acting [1] -> [], acting_primary 1 -> -1, up_primary 1 -> -1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 11 09:15:33 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 38 pg[4.6( empty local-lis/les=28/29 n=0 ec=28/17 lis/c=28/28 les/c/f=29/29/0 sis=38 pruub=11.102509499s) [] r=-1 lpr=38 pi=[28,38)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 85.281120300s@ mbc={}] state<Start>: transitioning to Stray
Dec 11 09:15:33 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 38 pg[3.0( empty local-lis/les=28/29 n=0 ec=15/15 lis/c=28/28 les/c/f=29/29/0 sis=38 pruub=11.101738930s) [] r=-1 lpr=38 pi=[28,38)/1 crt=0'0 mlcod 0'0 active pruub 85.280487061s@ mbc={}] PeeringState::start_peering_interval up [1] -> [], acting [1] -> [], acting_primary 1 -> -1, up_primary 1 -> -1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 11 09:15:33 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 38 pg[3.0( empty local-lis/les=28/29 n=0 ec=15/15 lis/c=28/28 les/c/f=29/29/0 sis=38 pruub=11.101738930s) [] r=-1 lpr=38 pi=[28,38)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 85.280487061s@ mbc={}] state<Start>: transitioning to Stray
Dec 11 09:15:33 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 38 pg[3.1b( empty local-lis/les=28/29 n=0 ec=28/15 lis/c=28/28 les/c/f=29/29/0 sis=38 pruub=11.102493286s) [] r=-1 lpr=38 pi=[28,38)/1 crt=0'0 mlcod 0'0 active pruub 85.281326294s@ mbc={}] PeeringState::start_peering_interval up [1] -> [], acting [1] -> [], acting_primary 1 -> -1, up_primary 1 -> -1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 11 09:15:33 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 38 pg[3.1b( empty local-lis/les=28/29 n=0 ec=28/15 lis/c=28/28 les/c/f=29/29/0 sis=38 pruub=11.102493286s) [] r=-1 lpr=38 pi=[28,38)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 85.281326294s@ mbc={}] state<Start>: transitioning to Stray
Dec 11 09:15:33 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 38 pg[5.d( empty local-lis/les=30/31 n=0 ec=30/19 lis/c=30/30 les/c/f=31/31/0 sis=38 pruub=13.170082092s) [] r=-1 lpr=38 pi=[30,38)/1 crt=0'0 mlcod 0'0 active pruub 87.348991394s@ mbc={}] PeeringState::start_peering_interval up [1] -> [], acting [1] -> [], acting_primary 1 -> -1, up_primary 1 -> -1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 11 09:15:33 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 38 pg[5.d( empty local-lis/les=30/31 n=0 ec=30/19 lis/c=30/30 les/c/f=31/31/0 sis=38 pruub=13.170082092s) [] r=-1 lpr=38 pi=[30,38)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 87.348991394s@ mbc={}] state<Start>: transitioning to Stray
Dec 11 09:15:33 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 38 pg[2.d( empty local-lis/les=32/33 n=0 ec=26/13 lis/c=32/32 les/c/f=33/33/0 sis=38 pruub=8.804262161s) [] r=-1 lpr=38 pi=[32,38)/1 crt=0'0 mlcod 0'0 active pruub 82.983200073s@ mbc={}] PeeringState::start_peering_interval up [1] -> [], acting [1] -> [], acting_primary 1 -> -1, up_primary 1 -> -1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 11 09:15:33 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 38 pg[2.d( empty local-lis/les=32/33 n=0 ec=26/13 lis/c=32/32 les/c/f=33/33/0 sis=38 pruub=8.804262161s) [] r=-1 lpr=38 pi=[32,38)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 82.983200073s@ mbc={}] state<Start>: transitioning to Stray
Dec 11 09:15:33 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 38 pg[2.a( empty local-lis/les=32/33 n=0 ec=26/13 lis/c=32/32 les/c/f=33/33/0 sis=38 pruub=8.804364204s) [] r=-1 lpr=38 pi=[32,38)/1 crt=0'0 mlcod 0'0 active pruub 82.983421326s@ mbc={}] PeeringState::start_peering_interval up [1] -> [], acting [1] -> [], acting_primary 1 -> -1, up_primary 1 -> -1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 11 09:15:33 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 38 pg[2.a( empty local-lis/les=32/33 n=0 ec=26/13 lis/c=32/32 les/c/f=33/33/0 sis=38 pruub=8.804364204s) [] r=-1 lpr=38 pi=[32,38)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 82.983421326s@ mbc={}] state<Start>: transitioning to Stray
Dec 11 09:15:33 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 38 pg[5.b( empty local-lis/les=30/31 n=0 ec=30/19 lis/c=30/30 les/c/f=31/31/0 sis=38 pruub=13.170292854s) [] r=-1 lpr=38 pi=[30,38)/1 crt=0'0 mlcod 0'0 active pruub 87.349487305s@ mbc={}] PeeringState::start_peering_interval up [1] -> [], acting [1] -> [], acting_primary 1 -> -1, up_primary 1 -> -1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 11 09:15:33 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 38 pg[5.b( empty local-lis/les=30/31 n=0 ec=30/19 lis/c=30/30 les/c/f=31/31/0 sis=38 pruub=13.170292854s) [] r=-1 lpr=38 pi=[30,38)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 87.349487305s@ mbc={}] state<Start>: transitioning to Stray
Dec 11 09:15:33 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 38 pg[7.a( empty local-lis/les=35/36 n=0 ec=32/21 lis/c=35/35 les/c/f=36/36/0 sis=38 pruub=9.621390343s) [] r=-1 lpr=38 pi=[35,38)/1 crt=0'0 mlcod 0'0 active pruub 83.800643921s@ mbc={}] PeeringState::start_peering_interval up [1] -> [], acting [1] -> [], acting_primary 1 -> -1, up_primary 1 -> -1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 11 09:15:33 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 38 pg[7.a( empty local-lis/les=35/36 n=0 ec=32/21 lis/c=35/35 les/c/f=36/36/0 sis=38 pruub=9.621390343s) [] r=-1 lpr=38 pi=[35,38)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 83.800643921s@ mbc={}] state<Start>: transitioning to Stray
Dec 11 09:15:33 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 38 pg[5.8( empty local-lis/les=30/31 n=0 ec=30/19 lis/c=30/30 les/c/f=31/31/0 sis=38 pruub=13.169857979s) [] r=-1 lpr=38 pi=[30,38)/1 crt=0'0 mlcod 0'0 active pruub 87.349174500s@ mbc={}] PeeringState::start_peering_interval up [1] -> [], acting [1] -> [], acting_primary 1 -> -1, up_primary 1 -> -1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 11 09:15:33 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 38 pg[5.8( empty local-lis/les=30/31 n=0 ec=30/19 lis/c=30/30 les/c/f=31/31/0 sis=38 pruub=13.169857979s) [] r=-1 lpr=38 pi=[30,38)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 87.349174500s@ mbc={}] state<Start>: transitioning to Stray
Dec 11 09:15:33 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 38 pg[7.14( empty local-lis/les=35/36 n=0 ec=32/21 lis/c=35/35 les/c/f=36/36/0 sis=38 pruub=9.621253967s) [] r=-1 lpr=38 pi=[35,38)/1 crt=0'0 mlcod 0'0 active pruub 83.800666809s@ mbc={}] PeeringState::start_peering_interval up [1] -> [], acting [1] -> [], acting_primary 1 -> -1, up_primary 1 -> -1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 11 09:15:33 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 38 pg[7.14( empty local-lis/les=35/36 n=0 ec=32/21 lis/c=35/35 les/c/f=36/36/0 sis=38 pruub=9.621253967s) [] r=-1 lpr=38 pi=[35,38)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 83.800666809s@ mbc={}] state<Start>: transitioning to Stray
Dec 11 09:15:33 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 38 pg[2.10( empty local-lis/les=32/33 n=0 ec=26/13 lis/c=32/32 les/c/f=33/33/0 sis=38 pruub=8.803919792s) [] r=-1 lpr=38 pi=[32,38)/1 crt=0'0 mlcod 0'0 active pruub 82.983428955s@ mbc={}] PeeringState::start_peering_interval up [1] -> [], acting [1] -> [], acting_primary 1 -> -1, up_primary 1 -> -1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 11 09:15:33 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 38 pg[2.10( empty local-lis/les=32/33 n=0 ec=26/13 lis/c=32/32 les/c/f=33/33/0 sis=38 pruub=8.803919792s) [] r=-1 lpr=38 pi=[32,38)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 82.983428955s@ mbc={}] state<Start>: transitioning to Stray
Dec 11 09:15:33 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 38 pg[2.13( empty local-lis/les=32/33 n=0 ec=26/13 lis/c=32/32 les/c/f=33/33/0 sis=38 pruub=8.803866386s) [] r=-1 lpr=38 pi=[32,38)/1 crt=0'0 mlcod 0'0 active pruub 82.983428955s@ mbc={}] PeeringState::start_peering_interval up [1] -> [], acting [1] -> [], acting_primary 1 -> -1, up_primary 1 -> -1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 11 09:15:33 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 38 pg[2.13( empty local-lis/les=32/33 n=0 ec=26/13 lis/c=32/32 les/c/f=33/33/0 sis=38 pruub=8.803866386s) [] r=-1 lpr=38 pi=[32,38)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 82.983428955s@ mbc={}] state<Start>: transitioning to Stray
Dec 11 09:15:33 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 38 pg[4.14( empty local-lis/les=28/29 n=0 ec=28/17 lis/c=28/28 les/c/f=29/29/0 sis=38 pruub=11.100204468s) [] r=-1 lpr=38 pi=[28,38)/1 crt=0'0 mlcod 0'0 active pruub 85.279830933s@ mbc={}] PeeringState::start_peering_interval up [1] -> [], acting [1] -> [], acting_primary 1 -> -1, up_primary 1 -> -1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 11 09:15:33 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 38 pg[4.14( empty local-lis/les=28/29 n=0 ec=28/17 lis/c=28/28 les/c/f=29/29/0 sis=38 pruub=11.100204468s) [] r=-1 lpr=38 pi=[28,38)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 85.279830933s@ mbc={}] state<Start>: transitioning to Stray
Dec 11 09:15:33 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 38 pg[2.15( empty local-lis/les=32/33 n=0 ec=26/13 lis/c=32/32 les/c/f=33/33/0 sis=38 pruub=8.803400040s) [] r=-1 lpr=38 pi=[32,38)/1 crt=0'0 mlcod 0'0 active pruub 82.983070374s@ mbc={}] PeeringState::start_peering_interval up [1] -> [], acting [1] -> [], acting_primary 1 -> -1, up_primary 1 -> -1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 11 09:15:33 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 38 pg[2.15( empty local-lis/les=32/33 n=0 ec=26/13 lis/c=32/32 les/c/f=33/33/0 sis=38 pruub=8.803400040s) [] r=-1 lpr=38 pi=[32,38)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 82.983070374s@ mbc={}] state<Start>: transitioning to Stray
Dec 11 09:15:33 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 38 pg[5.12( empty local-lis/les=30/31 n=0 ec=30/19 lis/c=30/30 les/c/f=31/31/0 sis=38 pruub=13.171490669s) [] r=-1 lpr=38 pi=[30,38)/1 crt=0'0 mlcod 0'0 active pruub 87.351211548s@ mbc={}] PeeringState::start_peering_interval up [1] -> [], acting [1] -> [], acting_primary 1 -> -1, up_primary 1 -> -1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 11 09:15:33 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 38 pg[5.12( empty local-lis/les=30/31 n=0 ec=30/19 lis/c=30/30 les/c/f=31/31/0 sis=38 pruub=13.171490669s) [] r=-1 lpr=38 pi=[30,38)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 87.351211548s@ mbc={}] state<Start>: transitioning to Stray
Dec 11 09:15:33 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 38 pg[5.13( empty local-lis/les=30/31 n=0 ec=30/19 lis/c=30/30 les/c/f=31/31/0 sis=38 pruub=13.171548843s) [] r=-1 lpr=38 pi=[30,38)/1 crt=0'0 mlcod 0'0 active pruub 87.351348877s@ mbc={}] PeeringState::start_peering_interval up [1] -> [], acting [1] -> [], acting_primary 1 -> -1, up_primary 1 -> -1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 11 09:15:33 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 38 pg[5.13( empty local-lis/les=30/31 n=0 ec=30/19 lis/c=30/30 les/c/f=31/31/0 sis=38 pruub=13.171548843s) [] r=-1 lpr=38 pi=[30,38)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 87.351348877s@ mbc={}] state<Start>: transitioning to Stray
Dec 11 09:15:33 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 38 pg[7.1d( empty local-lis/les=35/36 n=0 ec=32/21 lis/c=35/35 les/c/f=36/36/0 sis=38 pruub=9.624003410s) [] r=-1 lpr=38 pi=[35,38)/1 crt=0'0 mlcod 0'0 active pruub 83.803916931s@ mbc={}] PeeringState::start_peering_interval up [1] -> [], acting [1] -> [], acting_primary 1 -> -1, up_primary 1 -> -1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 11 09:15:33 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 38 pg[7.1d( empty local-lis/les=35/36 n=0 ec=32/21 lis/c=35/35 les/c/f=36/36/0 sis=38 pruub=9.624003410s) [] r=-1 lpr=38 pi=[35,38)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 83.803916931s@ mbc={}] state<Start>: transitioning to Stray
Dec 11 09:15:33 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 38 pg[2.c( empty local-lis/les=32/33 n=0 ec=26/13 lis/c=32/32 les/c/f=33/33/0 sis=38 pruub=8.803214073s) [] r=-1 lpr=38 pi=[32,38)/1 crt=0'0 mlcod 0'0 active pruub 82.983207703s@ mbc={}] PeeringState::start_peering_interval up [1] -> [], acting [1] -> [], acting_primary 1 -> -1, up_primary 1 -> -1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 11 09:15:33 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 38 pg[2.c( empty local-lis/les=32/33 n=0 ec=26/13 lis/c=32/32 les/c/f=33/33/0 sis=38 pruub=8.803214073s) [] r=-1 lpr=38 pi=[32,38)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 82.983207703s@ mbc={}] state<Start>: transitioning to Stray
Dec 11 09:15:33 compute-0 ceph-mon[74426]: 3.11 scrub starts
Dec 11 09:15:33 compute-0 ceph-mon[74426]: 3.11 scrub ok
Dec 11 09:15:33 compute-0 ceph-mon[74426]: from='osd.2 ' entity='osd.2' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]}]': finished
Dec 11 09:15:33 compute-0 ceph-mon[74426]: osdmap e37: 3 total, 2 up, 3 in
Dec 11 09:15:33 compute-0 ceph-mon[74426]: from='osd.2 [v2:192.168.122.102:6800/1051349850,v1:192.168.122.102:6801/1051349850]' entity='osd.2' cmd=[{"prefix": "osd crush create-or-move", "id": 2, "weight":0.0195, "args": ["host=compute-2", "root=default"]}]: dispatch
Dec 11 09:15:33 compute-0 ceph-mon[74426]: from='osd.2 ' entity='osd.2' cmd=[{"prefix": "osd crush create-or-move", "id": 2, "weight":0.0195, "args": ["host=compute-2", "root=default"]}]: dispatch
Dec 11 09:15:33 compute-0 ceph-mon[74426]: 5.0 deep-scrub starts
Dec 11 09:15:33 compute-0 ceph-mon[74426]: 5.0 deep-scrub ok
Dec 11 09:15:33 compute-0 ceph-osd[82859]: log_channel(cluster) log [DBG] : 3.7 scrub starts
Dec 11 09:15:33 compute-0 ceph-osd[82859]: log_channel(cluster) log [DBG] : 3.7 scrub ok
Dec 11 09:15:33 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-mgr-compute-0-wwpcae[74711]: 2025-12-11T09:15:33.380+0000 7fdde5d1d140 -1 mgr[py] Module rook has missing NOTIFY_TYPES member
Dec 11 09:15:33 compute-0 ceph-mgr[74715]: mgr[py] Module rook has missing NOTIFY_TYPES member
Dec 11 09:15:33 compute-0 ceph-mgr[74715]: mgr[py] Loading python module 'selftest'
Dec 11 09:15:33 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-mgr-compute-0-wwpcae[74711]: 2025-12-11T09:15:33.470+0000 7fdde5d1d140 -1 mgr[py] Module selftest has missing NOTIFY_TYPES member
Dec 11 09:15:33 compute-0 ceph-mgr[74715]: mgr[py] Module selftest has missing NOTIFY_TYPES member
Dec 11 09:15:33 compute-0 ceph-mgr[74715]: mgr[py] Loading python module 'snap_schedule'
Dec 11 09:15:33 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-mgr-compute-0-wwpcae[74711]: 2025-12-11T09:15:33.557+0000 7fdde5d1d140 -1 mgr[py] Module snap_schedule has missing NOTIFY_TYPES member
Dec 11 09:15:33 compute-0 ceph-mgr[74715]: mgr[py] Module snap_schedule has missing NOTIFY_TYPES member
Dec 11 09:15:33 compute-0 ceph-mgr[74715]: mgr[py] Loading python module 'stats'
Dec 11 09:15:33 compute-0 ceph-mgr[74715]: mgr[py] Loading python module 'status'
Dec 11 09:15:33 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-mgr-compute-0-wwpcae[74711]: 2025-12-11T09:15:33.718+0000 7fdde5d1d140 -1 mgr[py] Module status has missing NOTIFY_TYPES member
Dec 11 09:15:33 compute-0 ceph-mgr[74715]: mgr[py] Module status has missing NOTIFY_TYPES member
Dec 11 09:15:33 compute-0 ceph-mgr[74715]: mgr[py] Loading python module 'telegraf'
Dec 11 09:15:33 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-mgr-compute-0-wwpcae[74711]: 2025-12-11T09:15:33.792+0000 7fdde5d1d140 -1 mgr[py] Module telegraf has missing NOTIFY_TYPES member
Dec 11 09:15:33 compute-0 ceph-mgr[74715]: mgr[py] Module telegraf has missing NOTIFY_TYPES member
Dec 11 09:15:33 compute-0 ceph-mgr[74715]: mgr[py] Loading python module 'telemetry'
Dec 11 09:15:33 compute-0 ceph-mon[74426]: log_channel(cluster) log [DBG] : Standby manager daemon compute-1.unesvp restarted
Dec 11 09:15:33 compute-0 ceph-mon[74426]: log_channel(cluster) log [DBG] : Standby manager daemon compute-1.unesvp started
Dec 11 09:15:33 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-mgr-compute-0-wwpcae[74711]: 2025-12-11T09:15:33.969+0000 7fdde5d1d140 -1 mgr[py] Module telemetry has missing NOTIFY_TYPES member
Dec 11 09:15:33 compute-0 ceph-mgr[74715]: mgr[py] Module telemetry has missing NOTIFY_TYPES member
Dec 11 09:15:33 compute-0 ceph-mgr[74715]: mgr[py] Loading python module 'test_orchestrator'
Dec 11 09:15:33 compute-0 ceph-mon[74426]: log_channel(cluster) log [DBG] : Standby manager daemon compute-2.uiimcn restarted
Dec 11 09:15:33 compute-0 ceph-mon[74426]: log_channel(cluster) log [DBG] : Standby manager daemon compute-2.uiimcn started
Dec 11 09:15:33 compute-0 ceph-mon[74426]: mon.compute-0@0(leader).osd e38 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 11 09:15:34 compute-0 ceph-mon[74426]: 3.e scrub starts
Dec 11 09:15:34 compute-0 ceph-mon[74426]: 3.e scrub ok
Dec 11 09:15:34 compute-0 ceph-mon[74426]: from='osd.2 ' entity='osd.2' cmd='[{"prefix": "osd crush create-or-move", "id": 2, "weight":0.0195, "args": ["host=compute-2", "root=default"]}]': finished
Dec 11 09:15:34 compute-0 ceph-mon[74426]: osdmap e38: 3 total, 2 up, 3 in
Dec 11 09:15:34 compute-0 ceph-mon[74426]: 3.7 scrub starts
Dec 11 09:15:34 compute-0 ceph-mon[74426]: 3.7 scrub ok
Dec 11 09:15:34 compute-0 ceph-mon[74426]: Standby manager daemon compute-1.unesvp restarted
Dec 11 09:15:34 compute-0 ceph-mon[74426]: Standby manager daemon compute-1.unesvp started
Dec 11 09:15:34 compute-0 ceph-mon[74426]: Standby manager daemon compute-2.uiimcn restarted
Dec 11 09:15:34 compute-0 ceph-mon[74426]: Standby manager daemon compute-2.uiimcn started
Dec 11 09:15:34 compute-0 ceph-mon[74426]: log_channel(cluster) log [DBG] : mgrmap e14: compute-0.wwpcae(active, since 2m), standbys: compute-2.uiimcn, compute-1.unesvp
Dec 11 09:15:34 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-mgr-compute-0-wwpcae[74711]: 2025-12-11T09:15:34.214+0000 7fdde5d1d140 -1 mgr[py] Module test_orchestrator has missing NOTIFY_TYPES member
Dec 11 09:15:34 compute-0 ceph-mgr[74715]: mgr[py] Module test_orchestrator has missing NOTIFY_TYPES member
Dec 11 09:15:34 compute-0 ceph-mgr[74715]: mgr[py] Loading python module 'volumes'
Dec 11 09:15:34 compute-0 ceph-osd[82859]: log_channel(cluster) log [DBG] : 4.0 scrub starts
Dec 11 09:15:34 compute-0 ceph-osd[82859]: log_channel(cluster) log [DBG] : 4.0 scrub ok
Dec 11 09:15:34 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-mgr-compute-0-wwpcae[74711]: 2025-12-11T09:15:34.504+0000 7fdde5d1d140 -1 mgr[py] Module volumes has missing NOTIFY_TYPES member
Dec 11 09:15:34 compute-0 ceph-mgr[74715]: mgr[py] Module volumes has missing NOTIFY_TYPES member
Dec 11 09:15:34 compute-0 ceph-mgr[74715]: mgr[py] Loading python module 'zabbix'
Dec 11 09:15:34 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-mgr-compute-0-wwpcae[74711]: 2025-12-11T09:15:34.590+0000 7fdde5d1d140 -1 mgr[py] Module zabbix has missing NOTIFY_TYPES member
Dec 11 09:15:34 compute-0 ceph-mgr[74715]: mgr[py] Module zabbix has missing NOTIFY_TYPES member
Dec 11 09:15:34 compute-0 ceph-mon[74426]: log_channel(cluster) log [INF] : Active manager daemon compute-0.wwpcae restarted
Dec 11 09:15:34 compute-0 ceph-mon[74426]: mon.compute-0@0(leader).osd e38 do_prune osdmap full prune enabled
Dec 11 09:15:34 compute-0 ceph-mon[74426]: log_channel(cluster) log [INF] : Activating manager daemon compute-0.wwpcae
Dec 11 09:15:34 compute-0 ceph-mgr[74715]: ms_deliver_dispatch: unhandled message 0x555601f95860 mon_map magic: 0 from mon.0 v2:192.168.122.100:3300/0
Dec 11 09:15:34 compute-0 ceph-mon[74426]: mon.compute-0@0(leader).osd e39 e39: 3 total, 2 up, 3 in
Dec 11 09:15:34 compute-0 ceph-mon[74426]: log_channel(cluster) log [DBG] : osdmap e39: 3 total, 2 up, 3 in
Dec 11 09:15:34 compute-0 ceph-mon[74426]: log_channel(cluster) log [DBG] : mgrmap e15: compute-0.wwpcae(active, starting, since 0.0440277s), standbys: compute-2.uiimcn, compute-1.unesvp
Dec 11 09:15:34 compute-0 ceph-mgr[74715]: mgr handle_mgr_map Activating!
Dec 11 09:15:34 compute-0 ceph-mgr[74715]: mgr handle_mgr_map I am now activating
Dec 11 09:15:34 compute-0 ceph-mon[74426]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon metadata", "id": "compute-0"} v 0)
Dec 11 09:15:34 compute-0 ceph-mon[74426]: log_channel(audit) log [DBG] : from='mgr.14313 192.168.122.100:0/1097182915' entity='mgr.compute-0.wwpcae' cmd=[{"prefix": "mon metadata", "id": "compute-0"}]: dispatch
Dec 11 09:15:34 compute-0 ceph-mon[74426]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon metadata", "id": "compute-1"} v 0)
Dec 11 09:15:34 compute-0 ceph-mon[74426]: log_channel(audit) log [DBG] : from='mgr.14313 192.168.122.100:0/1097182915' entity='mgr.compute-0.wwpcae' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Dec 11 09:15:34 compute-0 ceph-mon[74426]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon metadata", "id": "compute-2"} v 0)
Dec 11 09:15:34 compute-0 ceph-mon[74426]: log_channel(audit) log [DBG] : from='mgr.14313 192.168.122.100:0/1097182915' entity='mgr.compute-0.wwpcae' cmd=[{"prefix": "mon metadata", "id": "compute-2"}]: dispatch
Dec 11 09:15:34 compute-0 ceph-mon[74426]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr metadata", "who": "compute-0.wwpcae", "id": "compute-0.wwpcae"} v 0)
Dec 11 09:15:34 compute-0 ceph-mon[74426]: log_channel(audit) log [DBG] : from='mgr.14313 192.168.122.100:0/1097182915' entity='mgr.compute-0.wwpcae' cmd=[{"prefix": "mgr metadata", "who": "compute-0.wwpcae", "id": "compute-0.wwpcae"}]: dispatch
Dec 11 09:15:34 compute-0 ceph-mon[74426]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr metadata", "who": "compute-2.uiimcn", "id": "compute-2.uiimcn"} v 0)
Dec 11 09:15:34 compute-0 ceph-mon[74426]: log_channel(audit) log [DBG] : from='mgr.14313 192.168.122.100:0/1097182915' entity='mgr.compute-0.wwpcae' cmd=[{"prefix": "mgr metadata", "who": "compute-2.uiimcn", "id": "compute-2.uiimcn"}]: dispatch
Dec 11 09:15:34 compute-0 ceph-mon[74426]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr metadata", "who": "compute-1.unesvp", "id": "compute-1.unesvp"} v 0)
Dec 11 09:15:34 compute-0 ceph-mon[74426]: log_channel(audit) log [DBG] : from='mgr.14313 192.168.122.100:0/1097182915' entity='mgr.compute-0.wwpcae' cmd=[{"prefix": "mgr metadata", "who": "compute-1.unesvp", "id": "compute-1.unesvp"}]: dispatch
Dec 11 09:15:34 compute-0 ceph-mon[74426]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0)
Dec 11 09:15:34 compute-0 ceph-mon[74426]: log_channel(audit) log [DBG] : from='mgr.14313 192.168.122.100:0/1097182915' entity='mgr.compute-0.wwpcae' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Dec 11 09:15:34 compute-0 ceph-mon[74426]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0)
Dec 11 09:15:34 compute-0 ceph-mon[74426]: log_channel(audit) log [DBG] : from='mgr.14313 192.168.122.100:0/1097182915' entity='mgr.compute-0.wwpcae' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Dec 11 09:15:34 compute-0 ceph-mon[74426]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Dec 11 09:15:34 compute-0 ceph-mon[74426]: log_channel(audit) log [DBG] : from='mgr.14313 192.168.122.100:0/1097182915' entity='mgr.compute-0.wwpcae' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Dec 11 09:15:34 compute-0 ceph-mgr[74715]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Dec 11 09:15:34 compute-0 ceph-mon[74426]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mds metadata"} v 0)
Dec 11 09:15:34 compute-0 ceph-mon[74426]: log_channel(audit) log [DBG] : from='mgr.14313 192.168.122.100:0/1097182915' entity='mgr.compute-0.wwpcae' cmd=[{"prefix": "mds metadata"}]: dispatch
Dec 11 09:15:34 compute-0 ceph-mon[74426]: mon.compute-0@0(leader).mds e1 all = 1
Dec 11 09:15:34 compute-0 ceph-mon[74426]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata"} v 0)
Dec 11 09:15:34 compute-0 ceph-mon[74426]: log_channel(audit) log [DBG] : from='mgr.14313 192.168.122.100:0/1097182915' entity='mgr.compute-0.wwpcae' cmd=[{"prefix": "osd metadata"}]: dispatch
Dec 11 09:15:34 compute-0 ceph-mon[74426]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon metadata"} v 0)
Dec 11 09:15:34 compute-0 ceph-mon[74426]: log_channel(audit) log [DBG] : from='mgr.14313 192.168.122.100:0/1097182915' entity='mgr.compute-0.wwpcae' cmd=[{"prefix": "mon metadata"}]: dispatch
Dec 11 09:15:34 compute-0 ceph-mgr[74715]: mgr load_all_metadata Skipping incomplete metadata entry
Dec 11 09:15:34 compute-0 ceph-mgr[74715]: [balancer DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec 11 09:15:34 compute-0 ceph-mgr[74715]: mgr load Constructed class from module: balancer
Dec 11 09:15:34 compute-0 ceph-mon[74426]: log_channel(cluster) log [INF] : Manager daemon compute-0.wwpcae is now available
Dec 11 09:15:34 compute-0 ceph-mgr[74715]: [balancer INFO root] Starting
Dec 11 09:15:34 compute-0 ceph-mgr[74715]: [cephadm DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec 11 09:15:34 compute-0 ceph-mgr[74715]: [balancer INFO root] Optimize plan auto_2025-12-11_09:15:34
Dec 11 09:15:34 compute-0 ceph-mgr[74715]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec 11 09:15:34 compute-0 ceph-mgr[74715]: [balancer INFO root] Some PGs (1.000000) are unknown; try again later
Dec 11 09:15:34 compute-0 ceph-mgr[74715]: mgr load Constructed class from module: cephadm
Dec 11 09:15:34 compute-0 ceph-mgr[74715]: [crash DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec 11 09:15:34 compute-0 ceph-mgr[74715]: mgr load Constructed class from module: crash
Dec 11 09:15:34 compute-0 ceph-mgr[74715]: [dashboard DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec 11 09:15:34 compute-0 ceph-mgr[74715]: mgr load Constructed class from module: dashboard
Dec 11 09:15:34 compute-0 ceph-mgr[74715]: [dashboard INFO access_control] Loading user roles DB version=2
Dec 11 09:15:34 compute-0 ceph-mgr[74715]: [dashboard INFO sso] Loading SSO DB version=1
Dec 11 09:15:34 compute-0 ceph-mgr[74715]: [dashboard INFO root] server: ssl=no host=192.168.122.100 port=8443
Dec 11 09:15:34 compute-0 ceph-mgr[74715]: [dashboard INFO root] Configured CherryPy, starting engine...
Dec 11 09:15:34 compute-0 ceph-mgr[74715]: [devicehealth DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec 11 09:15:34 compute-0 ceph-mgr[74715]: mgr load Constructed class from module: devicehealth
Dec 11 09:15:34 compute-0 ceph-mgr[74715]: [devicehealth INFO root] Starting
Dec 11 09:15:34 compute-0 ceph-mgr[74715]: [iostat DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec 11 09:15:34 compute-0 ceph-mgr[74715]: mgr load Constructed class from module: iostat
Dec 11 09:15:34 compute-0 ceph-mgr[74715]: [nfs DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec 11 09:15:34 compute-0 ceph-mgr[74715]: mgr load Constructed class from module: nfs
Dec 11 09:15:34 compute-0 ceph-mgr[74715]: [orchestrator DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec 11 09:15:34 compute-0 ceph-mgr[74715]: mgr load Constructed class from module: orchestrator
Dec 11 09:15:34 compute-0 ceph-mgr[74715]: [pg_autoscaler DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec 11 09:15:34 compute-0 ceph-mgr[74715]: mgr load Constructed class from module: pg_autoscaler
Dec 11 09:15:34 compute-0 ceph-mgr[74715]: [progress DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec 11 09:15:34 compute-0 ceph-mgr[74715]: mgr load Constructed class from module: progress
Dec 11 09:15:34 compute-0 ceph-mgr[74715]: [rbd_support DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec 11 09:15:34 compute-0 ceph-mgr[74715]: [pg_autoscaler INFO root] _maybe_adjust
Dec 11 09:15:34 compute-0 ceph-mgr[74715]: [progress INFO root] Loading...
Dec 11 09:15:34 compute-0 ceph-mgr[74715]: [progress INFO root] Loaded [<progress.module.GhostEvent object at 0x7fdd680ce9a0>, <progress.module.GhostEvent object at 0x7fdd680ce9d0>, <progress.module.GhostEvent object at 0x7fdd680cea00>, <progress.module.GhostEvent object at 0x7fdd680cea30>, <progress.module.GhostEvent object at 0x7fdd680cea60>, <progress.module.GhostEvent object at 0x7fdd680cea90>, <progress.module.GhostEvent object at 0x7fdd680ceac0>, <progress.module.GhostEvent object at 0x7fdd680ceaf0>, <progress.module.GhostEvent object at 0x7fdd680ceb20>, <progress.module.GhostEvent object at 0x7fdd680ceb50>, <progress.module.GhostEvent object at 0x7fdd680ceb80>, <progress.module.GhostEvent object at 0x7fdd680cebb0>, <progress.module.GhostEvent object at 0x7fdd680cebe0>] historic events
Dec 11 09:15:34 compute-0 ceph-mgr[74715]: [progress INFO root] Loaded OSDMap, ready.
Dec 11 09:15:34 compute-0 ceph-mgr[74715]: [rbd_support INFO root] recovery thread starting
Dec 11 09:15:34 compute-0 ceph-mgr[74715]: [rbd_support INFO root] starting setup
Dec 11 09:15:34 compute-0 ceph-mgr[74715]: mgr load Constructed class from module: rbd_support
Dec 11 09:15:34 compute-0 ceph-mgr[74715]: [restful DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec 11 09:15:34 compute-0 ceph-mgr[74715]: mgr load Constructed class from module: restful
Dec 11 09:15:34 compute-0 ceph-mon[74426]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.wwpcae/mirror_snapshot_schedule"} v 0)
Dec 11 09:15:34 compute-0 ceph-mon[74426]: log_channel(audit) log [INF] : from='mgr.14313 192.168.122.100:0/1097182915' entity='mgr.compute-0.wwpcae' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.wwpcae/mirror_snapshot_schedule"}]: dispatch
Dec 11 09:15:34 compute-0 ceph-mgr[74715]: [status DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec 11 09:15:34 compute-0 ceph-mgr[74715]: mgr load Constructed class from module: status
Dec 11 09:15:34 compute-0 ceph-mgr[74715]: [restful INFO root] server_addr: :: server_port: 8003
Dec 11 09:15:34 compute-0 ceph-mgr[74715]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec 11 09:15:34 compute-0 ceph-mgr[74715]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 11 09:15:34 compute-0 ceph-mgr[74715]: [restful WARNING root] server not running: no certificate configured
Dec 11 09:15:34 compute-0 ceph-mgr[74715]: [telemetry DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec 11 09:15:34 compute-0 ceph-mgr[74715]: mgr load Constructed class from module: telemetry
Dec 11 09:15:34 compute-0 ceph-mgr[74715]: [volumes DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec 11 09:15:34 compute-0 ceph-mgr[74715]: mgr load Constructed class from module: volumes
Dec 11 09:15:34 compute-0 ceph-mgr[74715]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: CephFS -> /api/cephfs
Dec 11 09:15:34 compute-0 ceph-mgr[74715]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: CephFsUi -> /ui-api/cephfs
Dec 11 09:15:34 compute-0 ceph-mgr[74715]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: CephFSSubvolume -> /api/cephfs/subvolume
Dec 11 09:15:34 compute-0 ceph-mgr[74715]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: CephFSSubvolumeGroups -> /api/cephfs/subvolume/group
Dec 11 09:15:34 compute-0 ceph-mgr[74715]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: CephFSSubvolumeSnapshots -> /api/cephfs/subvolume/snapshot
Dec 11 09:15:34 compute-0 ceph-mgr[74715]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: CephFsSnapshotClone -> /api/cephfs/subvolume/snapshot/clone
Dec 11 09:15:34 compute-0 ceph-mgr[74715]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: CephFSSnapshotSchedule -> /api/cephfs/snapshot/schedule
Dec 11 09:15:34 compute-0 ceph-mgr[74715]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: IscsiUi -> /ui-api/iscsi
Dec 11 09:15:34 compute-0 ceph-mgr[74715]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Iscsi -> /api/iscsi
Dec 11 09:15:34 compute-0 ceph-mgr[74715]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: IscsiTarget -> /api/iscsi/target
Dec 11 09:15:34 compute-0 ceph-mgr[74715]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: NFSGaneshaCluster -> /api/nfs-ganesha/cluster
Dec 11 09:15:34 compute-0 ceph-mgr[74715]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: NFSGaneshaExports -> /api/nfs-ganesha/export
Dec 11 09:15:34 compute-0 ceph-mgr[74715]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: NFSGaneshaUi -> /ui-api/nfs-ganesha
Dec 11 09:15:34 compute-0 ceph-mgr[74715]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Orchestrator -> /ui-api/orchestrator
Dec 11 09:15:34 compute-0 ceph-mgr[74715]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Service -> /api/service
Dec 11 09:15:34 compute-0 ceph-mgr[74715]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RbdMirroring -> /api/block/mirroring
Dec 11 09:15:34 compute-0 ceph-mgr[74715]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RbdMirroringSummary -> /api/block/mirroring/summary
Dec 11 09:15:34 compute-0 ceph-mgr[74715]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RbdMirroringPoolMode -> /api/block/mirroring/pool
Dec 11 09:15:34 compute-0 ceph-mgr[74715]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RbdMirroringPoolBootstrap -> /api/block/mirroring/pool/{pool_name}/bootstrap
Dec 11 09:15:34 compute-0 ceph-mgr[74715]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RbdMirroringPoolPeer -> /api/block/mirroring/pool/{pool_name}/peer
Dec 11 09:15:34 compute-0 ceph-mgr[74715]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RbdMirroringStatus -> /ui-api/block/mirroring
Dec 11 09:15:34 compute-0 ceph-mgr[74715]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Pool -> /api/pool
Dec 11 09:15:34 compute-0 ceph-mgr[74715]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: PoolUi -> /ui-api/pool
Dec 11 09:15:34 compute-0 ceph-mgr[74715]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RBDPool -> /api/pool
Dec 11 09:15:34 compute-0 ceph-mgr[74715]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Rbd -> /api/block/image
Dec 11 09:15:34 compute-0 ceph-mgr[74715]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RbdStatus -> /ui-api/block/rbd
Dec 11 09:15:34 compute-0 ceph-mgr[74715]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RbdSnapshot -> /api/block/image/{image_spec}/snap
Dec 11 09:15:34 compute-0 ceph-mgr[74715]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RbdTrash -> /api/block/image/trash
Dec 11 09:15:34 compute-0 ceph-mgr[74715]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RbdNamespace -> /api/block/pool/{pool_name}/namespace
Dec 11 09:15:34 compute-0 ceph-mgr[74715]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Rgw -> /ui-api/rgw
Dec 11 09:15:34 compute-0 ceph-mgr[74715]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RgwMultisiteStatus -> /ui-api/rgw/multisite
Dec 11 09:15:34 compute-0 ceph-mgr[74715]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RgwMultisiteController -> /api/rgw/multisite
Dec 11 09:15:34 compute-0 ceph-mgr[74715]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RgwDaemon -> /api/rgw/daemon
Dec 11 09:15:34 compute-0 ceph-mgr[74715]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RgwSite -> /api/rgw/site
Dec 11 09:15:34 compute-0 ceph-mgr[74715]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RgwBucket -> /api/rgw/bucket
Dec 11 09:15:34 compute-0 ceph-mgr[74715]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RgwBucketUi -> /ui-api/rgw/bucket
Dec 11 09:15:34 compute-0 ceph-mgr[74715]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RgwUser -> /api/rgw/user
Dec 11 09:15:34 compute-0 ceph-mgr[74715]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: rgwroles_CRUDClass -> /api/rgw/roles
Dec 11 09:15:34 compute-0 ceph-mgr[74715]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: rgwroles_CRUDClassMetadata -> /ui-api/rgw/roles
Dec 11 09:15:34 compute-0 ceph-mgr[74715]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RgwRealm -> /api/rgw/realm
Dec 11 09:15:34 compute-0 ceph-mgr[74715]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RgwZonegroup -> /api/rgw/zonegroup
Dec 11 09:15:34 compute-0 ceph-mgr[74715]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RgwZone -> /api/rgw/zone
Dec 11 09:15:34 compute-0 ceph-mgr[74715]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Auth -> /api/auth
Dec 11 09:15:34 compute-0 ceph-mgr[74715]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: clusteruser_CRUDClass -> /api/cluster/user
Dec 11 09:15:34 compute-0 ceph-mgr[74715]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: clusteruser_CRUDClassMetadata -> /ui-api/cluster/user
Dec 11 09:15:34 compute-0 ceph-mgr[74715]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Cluster -> /api/cluster
Dec 11 09:15:34 compute-0 ceph-mgr[74715]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: ClusterUpgrade -> /api/cluster/upgrade
Dec 11 09:15:34 compute-0 ceph-mgr[74715]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: ClusterConfiguration -> /api/cluster_conf
Dec 11 09:15:34 compute-0 ceph-mgr[74715]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: CrushRule -> /api/crush_rule
Dec 11 09:15:34 compute-0 ceph-mgr[74715]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: CrushRuleUi -> /ui-api/crush_rule
Dec 11 09:15:34 compute-0 ceph-mgr[74715]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Daemon -> /api/daemon
Dec 11 09:15:34 compute-0 ceph-mgr[74715]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Docs -> /docs
Dec 11 09:15:34 compute-0 ceph-mgr[74715]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: ErasureCodeProfile -> /api/erasure_code_profile
Dec 11 09:15:34 compute-0 ceph-mgr[74715]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: ErasureCodeProfileUi -> /ui-api/erasure_code_profile
Dec 11 09:15:34 compute-0 ceph-mgr[74715]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: FeedbackController -> /api/feedback
Dec 11 09:15:34 compute-0 ceph-mgr[74715]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: FeedbackApiController -> /api/feedback/api_key
Dec 11 09:15:34 compute-0 ceph-mgr[74715]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: FeedbackUiController -> /ui-api/feedback/api_key
Dec 11 09:15:34 compute-0 ceph-mgr[74715]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: FrontendLogging -> /ui-api/logging
Dec 11 09:15:34 compute-0 ceph-mgr[74715]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Grafana -> /api/grafana
Dec 11 09:15:34 compute-0 ceph-mgr[74715]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Host -> /api/host
Dec 11 09:15:34 compute-0 ceph-mgr[74715]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: HostUi -> /ui-api/host
Dec 11 09:15:34 compute-0 ceph-mgr[74715]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Health -> /api/health
Dec 11 09:15:34 compute-0 ceph-mgr[74715]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: HomeController -> /
Dec 11 09:15:34 compute-0 ceph-mgr[74715]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: LangsController -> /ui-api/langs
Dec 11 09:15:34 compute-0 ceph-mgr[74715]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: LoginController -> /ui-api/login
Dec 11 09:15:34 compute-0 ceph-mgr[74715]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Logs -> /api/logs
Dec 11 09:15:34 compute-0 ceph-mgr[74715]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: MgrModules -> /api/mgr/module
Dec 11 09:15:34 compute-0 ceph-mgr[74715]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Monitor -> /api/monitor
Dec 11 09:15:34 compute-0 ceph-mgr[74715]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: NVMeoFGateway -> /api/nvmeof/gateway
Dec 11 09:15:34 compute-0 ceph-mgr[74715]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: NVMeoFSpdk -> /api/nvmeof/spdk
Dec 11 09:15:35 compute-0 ceph-mgr[74715]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: NVMeoFSubsystem -> /api/nvmeof/subsystem
Dec 11 09:15:35 compute-0 ceph-mgr[74715]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: NVMeoFListener -> /api/nvmeof/subsystem/{nqn}/listener
Dec 11 09:15:35 compute-0 ceph-mgr[74715]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: NVMeoFNamespace -> /api/nvmeof/subsystem/{nqn}/namespace
Dec 11 09:15:35 compute-0 ceph-mgr[74715]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: NVMeoFHost -> /api/nvmeof/subsystem/{nqn}/host
Dec 11 09:15:35 compute-0 ceph-mgr[74715]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: NVMeoFConnection -> /api/nvmeof/subsystem/{nqn}/connection
Dec 11 09:15:35 compute-0 ceph-mgr[74715]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: NVMeoFTcpUI -> /ui-api/nvmeof
Dec 11 09:15:35 compute-0 ceph-mgr[74715]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Osd -> /api/osd
Dec 11 09:15:35 compute-0 ceph-mgr[74715]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: OsdUi -> /ui-api/osd
Dec 11 09:15:35 compute-0 sshd-session[88300]: Accepted publickey for ceph-admin from 192.168.122.100 port 52114 ssh2: RSA SHA256:VeIx2NZka5hi0niQjHCLie+FE2InrWghbFhBMbpMPGo
Dec 11 09:15:35 compute-0 ceph-mgr[74715]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: OsdFlagsController -> /api/osd/flags
Dec 11 09:15:35 compute-0 ceph-mgr[74715]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: MdsPerfCounter -> /api/perf_counters/mds
Dec 11 09:15:35 compute-0 ceph-mgr[74715]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: MonPerfCounter -> /api/perf_counters/mon
Dec 11 09:15:35 compute-0 ceph-mgr[74715]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: OsdPerfCounter -> /api/perf_counters/osd
Dec 11 09:15:35 compute-0 ceph-mgr[74715]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RgwPerfCounter -> /api/perf_counters/rgw
Dec 11 09:15:35 compute-0 ceph-mgr[74715]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RbdMirrorPerfCounter -> /api/perf_counters/rbd-mirror
Dec 11 09:15:35 compute-0 ceph-mgr[74715]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: MgrPerfCounter -> /api/perf_counters/mgr
Dec 11 09:15:35 compute-0 ceph-mgr[74715]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: TcmuRunnerPerfCounter -> /api/perf_counters/tcmu-runner
Dec 11 09:15:35 compute-0 ceph-mgr[74715]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: PerfCounters -> /api/perf_counters
Dec 11 09:15:35 compute-0 ceph-mgr[74715]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: PrometheusReceiver -> /api/prometheus_receiver
Dec 11 09:15:35 compute-0 ceph-mgr[74715]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Prometheus -> /api/prometheus
Dec 11 09:15:35 compute-0 ceph-mgr[74715]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: PrometheusNotifications -> /api/prometheus/notifications
Dec 11 09:15:35 compute-0 ceph-mgr[74715]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: PrometheusSettings -> /ui-api/prometheus
Dec 11 09:15:35 compute-0 ceph-mgr[74715]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Role -> /api/role
Dec 11 09:15:35 compute-0 ceph-mgr[74715]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Scope -> /ui-api/scope
Dec 11 09:15:35 compute-0 ceph-mgr[74715]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Saml2 -> /auth/saml2
Dec 11 09:15:35 compute-0 ceph-mgr[74715]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Settings -> /api/settings
Dec 11 09:15:35 compute-0 ceph-mgr[74715]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: StandardSettings -> /ui-api/standard_settings
Dec 11 09:15:35 compute-0 ceph-mgr[74715]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Summary -> /api/summary
Dec 11 09:15:35 compute-0 ceph-mgr[74715]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Task -> /api/task
Dec 11 09:15:35 compute-0 ceph-mgr[74715]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Telemetry -> /api/telemetry
Dec 11 09:15:35 compute-0 ceph-mgr[74715]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: User -> /api/user
Dec 11 09:15:35 compute-0 ceph-mgr[74715]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: UserPasswordPolicy -> /api/user
Dec 11 09:15:35 compute-0 ceph-mgr[74715]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: UserChangePassword -> /api/user/{username}
Dec 11 09:15:35 compute-0 ceph-mgr[74715]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: FeatureTogglesEndpoint -> /api/feature_toggles
Dec 11 09:15:35 compute-0 ceph-mgr[74715]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: MessageOfTheDay -> /ui-api/motd
Dec 11 09:15:35 compute-0 systemd-logind[792]: New session 34 of user ceph-admin.
Dec 11 09:15:35 compute-0 systemd[1]: Started Session 34 of User ceph-admin.
Dec 11 09:15:35 compute-0 sshd-session[88300]: pam_unix(sshd:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Dec 11 09:15:35 compute-0 ceph-mon[74426]: purged_snaps scrub starts
Dec 11 09:15:35 compute-0 ceph-mon[74426]: purged_snaps scrub ok
Dec 11 09:15:35 compute-0 ceph-mon[74426]: 5.9 scrub starts
Dec 11 09:15:35 compute-0 ceph-mon[74426]: 5.9 scrub ok
Dec 11 09:15:35 compute-0 ceph-mon[74426]: mgrmap e14: compute-0.wwpcae(active, since 2m), standbys: compute-2.uiimcn, compute-1.unesvp
Dec 11 09:15:35 compute-0 ceph-mon[74426]: 4.0 scrub starts
Dec 11 09:15:35 compute-0 ceph-mon[74426]: 4.0 scrub ok
Dec 11 09:15:35 compute-0 ceph-mon[74426]: Active manager daemon compute-0.wwpcae restarted
Dec 11 09:15:35 compute-0 ceph-mon[74426]: Activating manager daemon compute-0.wwpcae
Dec 11 09:15:35 compute-0 ceph-mon[74426]: osdmap e39: 3 total, 2 up, 3 in
Dec 11 09:15:35 compute-0 ceph-mon[74426]: mgrmap e15: compute-0.wwpcae(active, starting, since 0.0440277s), standbys: compute-2.uiimcn, compute-1.unesvp
Dec 11 09:15:35 compute-0 ceph-mon[74426]: from='mgr.14313 192.168.122.100:0/1097182915' entity='mgr.compute-0.wwpcae' cmd=[{"prefix": "mon metadata", "id": "compute-0"}]: dispatch
Dec 11 09:15:35 compute-0 ceph-mon[74426]: from='mgr.14313 192.168.122.100:0/1097182915' entity='mgr.compute-0.wwpcae' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Dec 11 09:15:35 compute-0 ceph-mon[74426]: from='mgr.14313 192.168.122.100:0/1097182915' entity='mgr.compute-0.wwpcae' cmd=[{"prefix": "mon metadata", "id": "compute-2"}]: dispatch
Dec 11 09:15:35 compute-0 ceph-mon[74426]: from='mgr.14313 192.168.122.100:0/1097182915' entity='mgr.compute-0.wwpcae' cmd=[{"prefix": "mgr metadata", "who": "compute-0.wwpcae", "id": "compute-0.wwpcae"}]: dispatch
Dec 11 09:15:35 compute-0 ceph-mon[74426]: from='mgr.14313 192.168.122.100:0/1097182915' entity='mgr.compute-0.wwpcae' cmd=[{"prefix": "mgr metadata", "who": "compute-2.uiimcn", "id": "compute-2.uiimcn"}]: dispatch
Dec 11 09:15:35 compute-0 ceph-mon[74426]: from='mgr.14313 192.168.122.100:0/1097182915' entity='mgr.compute-0.wwpcae' cmd=[{"prefix": "mgr metadata", "who": "compute-1.unesvp", "id": "compute-1.unesvp"}]: dispatch
Dec 11 09:15:35 compute-0 ceph-mon[74426]: from='mgr.14313 192.168.122.100:0/1097182915' entity='mgr.compute-0.wwpcae' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Dec 11 09:15:35 compute-0 ceph-mon[74426]: from='mgr.14313 192.168.122.100:0/1097182915' entity='mgr.compute-0.wwpcae' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Dec 11 09:15:35 compute-0 ceph-mon[74426]: from='mgr.14313 192.168.122.100:0/1097182915' entity='mgr.compute-0.wwpcae' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Dec 11 09:15:35 compute-0 ceph-mon[74426]: from='mgr.14313 192.168.122.100:0/1097182915' entity='mgr.compute-0.wwpcae' cmd=[{"prefix": "mds metadata"}]: dispatch
Dec 11 09:15:35 compute-0 ceph-mon[74426]: from='mgr.14313 192.168.122.100:0/1097182915' entity='mgr.compute-0.wwpcae' cmd=[{"prefix": "osd metadata"}]: dispatch
Dec 11 09:15:35 compute-0 ceph-mon[74426]: from='mgr.14313 192.168.122.100:0/1097182915' entity='mgr.compute-0.wwpcae' cmd=[{"prefix": "mon metadata"}]: dispatch
Dec 11 09:15:35 compute-0 ceph-mon[74426]: Manager daemon compute-0.wwpcae is now available
Dec 11 09:15:35 compute-0 ceph-mon[74426]: from='mgr.14313 192.168.122.100:0/1097182915' entity='mgr.compute-0.wwpcae' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.wwpcae/mirror_snapshot_schedule"}]: dispatch
Dec 11 09:15:35 compute-0 ceph-osd[82859]: log_channel(cluster) log [DBG] : 4.7 scrub starts
Dec 11 09:15:35 compute-0 ceph-mgr[74715]: [dashboard INFO dashboard.module] Engine started.
Dec 11 09:15:35 compute-0 ceph-osd[82859]: log_channel(cluster) log [DBG] : 4.7 scrub ok
Dec 11 09:15:35 compute-0 sudo[88315]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 11 09:15:35 compute-0 sudo[88315]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 11 09:15:35 compute-0 sudo[88315]: pam_unix(sudo:session): session closed for user root
Dec 11 09:15:35 compute-0 sudo[88341]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ls
Dec 11 09:15:35 compute-0 sudo[88341]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 11 09:15:36 compute-0 ceph-mgr[74715]: [cephadm INFO cherrypy.error] [11/Dec/2025:09:15:36] ENGINE Bus STARTING
Dec 11 09:15:36 compute-0 ceph-mgr[74715]: log_channel(cephadm) log [INF] : [11/Dec/2025:09:15:36] ENGINE Bus STARTING
Dec 11 09:15:36 compute-0 podman[88436]: 2025-12-11 09:15:36.189732566 +0000 UTC m=+0.078869434 container exec 9c08b5b1828392d2cc014af7ec9c415c9c915d4b6dd9798da9403e6929851bd4 (image=quay.io/ceph/ceph:v19, name=ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-mon-compute-0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, ceph=True, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2)
Dec 11 09:15:36 compute-0 ceph-mgr[74715]: [cephadm INFO cherrypy.error] [11/Dec/2025:09:15:36] ENGINE Serving on http://192.168.122.100:8765
Dec 11 09:15:36 compute-0 ceph-mgr[74715]: log_channel(cephadm) log [INF] : [11/Dec/2025:09:15:36] ENGINE Serving on http://192.168.122.100:8765
Dec 11 09:15:36 compute-0 ceph-osd[82859]: log_channel(cluster) log [DBG] : 5.6 scrub starts
Dec 11 09:15:36 compute-0 ceph-osd[82859]: log_channel(cluster) log [DBG] : 5.6 scrub ok
Dec 11 09:15:36 compute-0 podman[88436]: 2025-12-11 09:15:36.338899148 +0000 UTC m=+0.228035976 container exec_died 9c08b5b1828392d2cc014af7ec9c415c9c915d4b6dd9798da9403e6929851bd4 (image=quay.io/ceph/ceph:v19, name=ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-mon-compute-0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250325, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 11 09:15:36 compute-0 ceph-mgr[74715]: [cephadm INFO cherrypy.error] [11/Dec/2025:09:15:36] ENGINE Serving on https://192.168.122.100:7150
Dec 11 09:15:36 compute-0 ceph-mgr[74715]: log_channel(cephadm) log [INF] : [11/Dec/2025:09:15:36] ENGINE Serving on https://192.168.122.100:7150
Dec 11 09:15:36 compute-0 ceph-mgr[74715]: [cephadm INFO cherrypy.error] [11/Dec/2025:09:15:36] ENGINE Bus STARTED
Dec 11 09:15:36 compute-0 ceph-mgr[74715]: log_channel(cephadm) log [INF] : [11/Dec/2025:09:15:36] ENGINE Bus STARTED
Dec 11 09:15:36 compute-0 ceph-mgr[74715]: [cephadm INFO cherrypy.error] [11/Dec/2025:09:15:36] ENGINE Client ('192.168.122.100', 45858) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)')
Dec 11 09:15:36 compute-0 ceph-mgr[74715]: log_channel(cephadm) log [INF] : [11/Dec/2025:09:15:36] ENGINE Client ('192.168.122.100', 45858) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)')
Dec 11 09:15:36 compute-0 ceph-mgr[74715]: log_channel(audit) log [DBG] : from='client.14319 -' entity='client.admin' cmd=[{"prefix": "dashboard set-grafana-api-username", "value": "admin", "target": ["mon-mgr", ""]}]: dispatch
Dec 11 09:15:36 compute-0 ceph-mon[74426]: log_channel(cluster) log [DBG] : mgrmap e16: compute-0.wwpcae(active, since 1.90983s), standbys: compute-2.uiimcn, compute-1.unesvp
Dec 11 09:15:36 compute-0 ceph-mgr[74715]: log_channel(cluster) log [DBG] : pgmap v3: 193 pgs: 136 active+clean, 57 unknown; 449 KiB data, 54 MiB used, 40 GiB / 40 GiB avail
Dec 11 09:15:36 compute-0 ceph-mon[74426]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config set, name=mgr/dashboard/GRAFANA_API_USERNAME}] v 0)
Dec 11 09:15:36 compute-0 ceph-mgr[74715]: mgr.server handle_open ignoring open from osd.2 v2:192.168.122.102:6800/1051349850; not ready for session (expect reconnect)
Dec 11 09:15:36 compute-0 ceph-mon[74426]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Dec 11 09:15:36 compute-0 ceph-mon[74426]: log_channel(audit) log [DBG] : from='mgr.14313 192.168.122.100:0/1097182915' entity='mgr.compute-0.wwpcae' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Dec 11 09:15:36 compute-0 ceph-mon[74426]: 3.c scrub starts
Dec 11 09:15:36 compute-0 ceph-mon[74426]: 3.c scrub ok
Dec 11 09:15:36 compute-0 ceph-mon[74426]: 4.7 scrub starts
Dec 11 09:15:36 compute-0 ceph-mon[74426]: 4.7 scrub ok
Dec 11 09:15:36 compute-0 ceph-mgr[74715]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Dec 11 09:15:36 compute-0 ceph-mon[74426]: log_channel(audit) log [INF] : from='mgr.14313 192.168.122.100:0/1097182915' entity='mgr.compute-0.wwpcae' 
Dec 11 09:15:36 compute-0 adoring_williams[88153]: Option GRAFANA_API_USERNAME updated
Dec 11 09:15:36 compute-0 ceph-mgr[74715]: log_channel(cluster) log [DBG] : pgmap v4: 193 pgs: 136 active+clean, 57 unknown; 449 KiB data, 54 MiB used, 40 GiB / 40 GiB avail
Dec 11 09:15:36 compute-0 systemd[1]: libpod-f77f2f0622c0bf336494b240b3beb4ce7cd0038e46148ae49a77f561da368626.scope: Deactivated successfully.
Dec 11 09:15:36 compute-0 sudo[88341]: pam_unix(sudo:session): session closed for user root
Dec 11 09:15:36 compute-0 podman[88138]: 2025-12-11 09:15:36.696484369 +0000 UTC m=+8.863783094 container died f77f2f0622c0bf336494b240b3beb4ce7cd0038e46148ae49a77f561da368626 (image=quay.io/ceph/ceph:v19, name=adoring_williams, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS)
Dec 11 09:15:36 compute-0 ceph-mon[74426]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec 11 09:15:36 compute-0 systemd[1]: var-lib-containers-storage-overlay-bbc4b1161a971ba641bceaee871148b7d20afda9983e938b38bf6ff1dd188d9a-merged.mount: Deactivated successfully.
Dec 11 09:15:36 compute-0 ceph-mon[74426]: log_channel(audit) log [INF] : from='mgr.14313 192.168.122.100:0/1097182915' entity='mgr.compute-0.wwpcae' 
Dec 11 09:15:36 compute-0 ceph-mon[74426]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec 11 09:15:36 compute-0 podman[88138]: 2025-12-11 09:15:36.778410029 +0000 UTC m=+8.945708724 container remove f77f2f0622c0bf336494b240b3beb4ce7cd0038e46148ae49a77f561da368626 (image=quay.io/ceph/ceph:v19, name=adoring_williams, io.buildah.version=1.40.1, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 11 09:15:36 compute-0 ceph-mon[74426]: log_channel(audit) log [INF] : from='mgr.14313 192.168.122.100:0/1097182915' entity='mgr.compute-0.wwpcae' 
Dec 11 09:15:36 compute-0 systemd[1]: libpod-conmon-f77f2f0622c0bf336494b240b3beb4ce7cd0038e46148ae49a77f561da368626.scope: Deactivated successfully.
Dec 11 09:15:36 compute-0 ceph-mgr[74715]: [devicehealth INFO root] Check health
Dec 11 09:15:36 compute-0 sudo[88135]: pam_unix(sudo:session): session closed for user root
Dec 11 09:15:36 compute-0 sudo[88571]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 11 09:15:36 compute-0 sudo[88571]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 11 09:15:36 compute-0 sudo[88571]: pam_unix(sudo:session): session closed for user root
Dec 11 09:15:36 compute-0 ceph-mon[74426]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Dec 11 09:15:36 compute-0 ceph-mon[74426]: log_channel(audit) log [INF] : from='mgr.14313 192.168.122.100:0/1097182915' entity='mgr.compute-0.wwpcae' 
Dec 11 09:15:36 compute-0 ceph-mon[74426]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Dec 11 09:15:36 compute-0 ceph-mon[74426]: log_channel(audit) log [INF] : from='mgr.14313 192.168.122.100:0/1097182915' entity='mgr.compute-0.wwpcae' 
Dec 11 09:15:36 compute-0 sudo[88596]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --timeout 895 gather-facts
Dec 11 09:15:36 compute-0 sudo[88596]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 11 09:15:36 compute-0 sudo[88644]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kyishmmscdzadgnwcngtdaxyrclqaomj ; /usr/bin/python3'
Dec 11 09:15:37 compute-0 sudo[88644]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 11 09:15:37 compute-0 python3[88646]: ansible-ansible.legacy.command Invoked with stdin=/home/grafana_password.yml stdin_add_newline=False _raw_params=podman run --rm --net=host --ipc=host   --interactive  --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   dashboard set-grafana-api-password -i - _uses_shell=False strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None
Dec 11 09:15:37 compute-0 podman[88648]: 2025-12-11 09:15:37.217741515 +0000 UTC m=+0.053221294 container create 4c5b25e2f09a9eb3515589872c3eb959ca1dfd3fa0f2d8ab14d5225391eff75f (image=quay.io/ceph/ceph:v19, name=naughty_wu, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325)
Dec 11 09:15:37 compute-0 ceph-osd[82859]: log_channel(cluster) log [DBG] : 5.c scrub starts
Dec 11 09:15:37 compute-0 systemd[1]: Started libpod-conmon-4c5b25e2f09a9eb3515589872c3eb959ca1dfd3fa0f2d8ab14d5225391eff75f.scope.
Dec 11 09:15:37 compute-0 ceph-osd[82859]: log_channel(cluster) log [DBG] : 5.c scrub ok
Dec 11 09:15:37 compute-0 podman[88648]: 2025-12-11 09:15:37.193330714 +0000 UTC m=+0.028810523 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Dec 11 09:15:37 compute-0 systemd[1]: Started libcrun container.
Dec 11 09:15:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/79c76b2c4f6a943e418cf8cf24ad9451caa9e739eac662382bc3b5b07d54134f/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Dec 11 09:15:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/79c76b2c4f6a943e418cf8cf24ad9451caa9e739eac662382bc3b5b07d54134f/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec 11 09:15:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/79c76b2c4f6a943e418cf8cf24ad9451caa9e739eac662382bc3b5b07d54134f/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 11 09:15:37 compute-0 podman[88648]: 2025-12-11 09:15:37.315223594 +0000 UTC m=+0.150703493 container init 4c5b25e2f09a9eb3515589872c3eb959ca1dfd3fa0f2d8ab14d5225391eff75f (image=quay.io/ceph/ceph:v19, name=naughty_wu, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2)
Dec 11 09:15:37 compute-0 podman[88648]: 2025-12-11 09:15:37.326235337 +0000 UTC m=+0.161715126 container start 4c5b25e2f09a9eb3515589872c3eb959ca1dfd3fa0f2d8ab14d5225391eff75f (image=quay.io/ceph/ceph:v19, name=naughty_wu, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 11 09:15:37 compute-0 podman[88648]: 2025-12-11 09:15:37.330887605 +0000 UTC m=+0.166367374 container attach 4c5b25e2f09a9eb3515589872c3eb959ca1dfd3fa0f2d8ab14d5225391eff75f (image=quay.io/ceph/ceph:v19, name=naughty_wu, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 11 09:15:37 compute-0 sudo[88596]: pam_unix(sudo:session): session closed for user root
Dec 11 09:15:37 compute-0 sudo[88716]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 11 09:15:37 compute-0 sudo[88716]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 11 09:15:37 compute-0 sudo[88716]: pam_unix(sudo:session): session closed for user root
Dec 11 09:15:37 compute-0 ceph-mgr[74715]: mgr.server handle_open ignoring open from osd.2 v2:192.168.122.102:6800/1051349850; not ready for session (expect reconnect)
Dec 11 09:15:37 compute-0 ceph-mon[74426]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Dec 11 09:15:37 compute-0 ceph-mon[74426]: log_channel(audit) log [DBG] : from='mgr.14313 192.168.122.100:0/1097182915' entity='mgr.compute-0.wwpcae' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Dec 11 09:15:37 compute-0 ceph-mgr[74715]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Dec 11 09:15:37 compute-0 ceph-mon[74426]: log_channel(cluster) log [DBG] : mgrmap e17: compute-0.wwpcae(active, since 3s), standbys: compute-2.uiimcn, compute-1.unesvp
Dec 11 09:15:37 compute-0 ceph-mon[74426]: 6.a scrub starts
Dec 11 09:15:37 compute-0 ceph-mon[74426]: 6.a scrub ok
Dec 11 09:15:37 compute-0 ceph-mon[74426]: [11/Dec/2025:09:15:36] ENGINE Bus STARTING
Dec 11 09:15:37 compute-0 ceph-mon[74426]: [11/Dec/2025:09:15:36] ENGINE Serving on http://192.168.122.100:8765
Dec 11 09:15:37 compute-0 ceph-mon[74426]: 5.6 scrub starts
Dec 11 09:15:37 compute-0 ceph-mon[74426]: 5.6 scrub ok
Dec 11 09:15:37 compute-0 ceph-mon[74426]: [11/Dec/2025:09:15:36] ENGINE Serving on https://192.168.122.100:7150
Dec 11 09:15:37 compute-0 ceph-mon[74426]: [11/Dec/2025:09:15:36] ENGINE Bus STARTED
Dec 11 09:15:37 compute-0 ceph-mon[74426]: [11/Dec/2025:09:15:36] ENGINE Client ('192.168.122.100', 45858) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)')
Dec 11 09:15:37 compute-0 ceph-mon[74426]: from='client.14319 -' entity='client.admin' cmd=[{"prefix": "dashboard set-grafana-api-username", "value": "admin", "target": ["mon-mgr", ""]}]: dispatch
Dec 11 09:15:37 compute-0 ceph-mon[74426]: mgrmap e16: compute-0.wwpcae(active, since 1.90983s), standbys: compute-2.uiimcn, compute-1.unesvp
Dec 11 09:15:37 compute-0 ceph-mon[74426]: pgmap v3: 193 pgs: 136 active+clean, 57 unknown; 449 KiB data, 54 MiB used, 40 GiB / 40 GiB avail
Dec 11 09:15:37 compute-0 ceph-mon[74426]: from='mgr.14313 192.168.122.100:0/1097182915' entity='mgr.compute-0.wwpcae' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Dec 11 09:15:37 compute-0 ceph-mon[74426]: from='mgr.14313 192.168.122.100:0/1097182915' entity='mgr.compute-0.wwpcae' 
Dec 11 09:15:37 compute-0 ceph-mon[74426]: pgmap v4: 193 pgs: 136 active+clean, 57 unknown; 449 KiB data, 54 MiB used, 40 GiB / 40 GiB avail
Dec 11 09:15:37 compute-0 ceph-mon[74426]: 6.8 scrub starts
Dec 11 09:15:37 compute-0 ceph-mon[74426]: 6.8 scrub ok
Dec 11 09:15:37 compute-0 ceph-mon[74426]: from='mgr.14313 192.168.122.100:0/1097182915' entity='mgr.compute-0.wwpcae' 
Dec 11 09:15:37 compute-0 ceph-mon[74426]: from='mgr.14313 192.168.122.100:0/1097182915' entity='mgr.compute-0.wwpcae' 
Dec 11 09:15:37 compute-0 ceph-mon[74426]: from='mgr.14313 192.168.122.100:0/1097182915' entity='mgr.compute-0.wwpcae' 
Dec 11 09:15:37 compute-0 ceph-mon[74426]: from='mgr.14313 192.168.122.100:0/1097182915' entity='mgr.compute-0.wwpcae' 
Dec 11 09:15:37 compute-0 sudo[88741]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 list-networks
Dec 11 09:15:37 compute-0 sudo[88741]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 11 09:15:37 compute-0 ceph-mgr[74715]: log_channel(audit) log [DBG] : from='client.14349 -' entity='client.admin' cmd=[{"prefix": "dashboard set-grafana-api-password", "target": ["mon-mgr", ""]}]: dispatch
Dec 11 09:15:37 compute-0 ceph-mon[74426]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config set, name=mgr/dashboard/GRAFANA_API_PASSWORD}] v 0)
Dec 11 09:15:37 compute-0 ceph-mon[74426]: log_channel(audit) log [INF] : from='mgr.14313 192.168.122.100:0/1097182915' entity='mgr.compute-0.wwpcae' 
Dec 11 09:15:37 compute-0 naughty_wu[88676]: Option GRAFANA_API_PASSWORD updated
Dec 11 09:15:37 compute-0 systemd[1]: libpod-4c5b25e2f09a9eb3515589872c3eb959ca1dfd3fa0f2d8ab14d5225391eff75f.scope: Deactivated successfully.
Dec 11 09:15:37 compute-0 podman[88768]: 2025-12-11 09:15:37.881573213 +0000 UTC m=+0.026989434 container died 4c5b25e2f09a9eb3515589872c3eb959ca1dfd3fa0f2d8ab14d5225391eff75f (image=quay.io/ceph/ceph:v19, name=naughty_wu, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Dec 11 09:15:37 compute-0 systemd[1]: var-lib-containers-storage-overlay-79c76b2c4f6a943e418cf8cf24ad9451caa9e739eac662382bc3b5b07d54134f-merged.mount: Deactivated successfully.
Dec 11 09:15:37 compute-0 podman[88768]: 2025-12-11 09:15:37.924718854 +0000 UTC m=+0.070135055 container remove 4c5b25e2f09a9eb3515589872c3eb959ca1dfd3fa0f2d8ab14d5225391eff75f (image=quay.io/ceph/ceph:v19, name=naughty_wu, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Dec 11 09:15:37 compute-0 systemd[1]: libpod-conmon-4c5b25e2f09a9eb3515589872c3eb959ca1dfd3fa0f2d8ab14d5225391eff75f.scope: Deactivated successfully.
Dec 11 09:15:37 compute-0 sudo[88644]: pam_unix(sudo:session): session closed for user root
Dec 11 09:15:37 compute-0 sudo[88741]: pam_unix(sudo:session): session closed for user root
Dec 11 09:15:37 compute-0 ceph-mon[74426]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec 11 09:15:37 compute-0 ceph-mon[74426]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0)
Dec 11 09:15:38 compute-0 ceph-mon[74426]: log_channel(audit) log [INF] : from='mgr.14313 192.168.122.100:0/1097182915' entity='mgr.compute-0.wwpcae' 
Dec 11 09:15:38 compute-0 ceph-mon[74426]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec 11 09:15:38 compute-0 ceph-mon[74426]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Dec 11 09:15:38 compute-0 ceph-mon[74426]: log_channel(audit) log [INF] : from='mgr.14313 192.168.122.100:0/1097182915' entity='mgr.compute-0.wwpcae' 
Dec 11 09:15:38 compute-0 ceph-mon[74426]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Dec 11 09:15:38 compute-0 ceph-mon[74426]: log_channel(audit) log [INF] : from='mgr.14313 192.168.122.100:0/1097182915' entity='mgr.compute-0.wwpcae' 
Dec 11 09:15:38 compute-0 ceph-mon[74426]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config rm", "who": "osd.1", "name": "osd_memory_target"} v 0)
Dec 11 09:15:38 compute-0 ceph-mon[74426]: log_channel(audit) log [INF] : from='mgr.14313 192.168.122.100:0/1097182915' entity='mgr.compute-0.wwpcae' cmd=[{"prefix": "config rm", "who": "osd.1", "name": "osd_memory_target"}]: dispatch
Dec 11 09:15:38 compute-0 ceph-mon[74426]: log_channel(audit) log [INF] : from='mgr.14313 192.168.122.100:0/1097182915' entity='mgr.compute-0.wwpcae' 
Dec 11 09:15:38 compute-0 ceph-mon[74426]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Dec 11 09:15:38 compute-0 sudo[88822]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bfxyriijkxavgousmxgisizzzbsjnzbd ; /usr/bin/python3'
Dec 11 09:15:38 compute-0 sudo[88822]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 11 09:15:38 compute-0 ceph-mon[74426]: log_channel(audit) log [INF] : from='mgr.14313 192.168.122.100:0/1097182915' entity='mgr.compute-0.wwpcae' 
Dec 11 09:15:38 compute-0 ceph-mgr[74715]: [cephadm INFO root] Adjusting osd_memory_target on compute-0 to 127.9M
Dec 11 09:15:38 compute-0 ceph-mgr[74715]: log_channel(cephadm) log [INF] : Adjusting osd_memory_target on compute-0 to 127.9M
Dec 11 09:15:38 compute-0 ceph-mon[74426]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config set, name=osd_memory_target}] v 0)
Dec 11 09:15:38 compute-0 ceph-mon[74426]: log_channel(audit) log [INF] : from='mgr.14313 192.168.122.100:0/1097182915' entity='mgr.compute-0.wwpcae' 
Dec 11 09:15:38 compute-0 ceph-mgr[74715]: [cephadm WARNING cephadm.serve] Unable to set osd_memory_target on compute-0 to 134203392: error parsing value: Value '134203392' is below minimum 939524096
Dec 11 09:15:38 compute-0 ceph-mgr[74715]: log_channel(cephadm) log [WRN] : Unable to set osd_memory_target on compute-0 to 134203392: error parsing value: Value '134203392' is below minimum 939524096
Dec 11 09:15:38 compute-0 ceph-mon[74426]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config rm", "who": "osd.0", "name": "osd_memory_target"} v 0)
Dec 11 09:15:38 compute-0 ceph-mon[74426]: log_channel(audit) log [INF] : from='mgr.14313 192.168.122.100:0/1097182915' entity='mgr.compute-0.wwpcae' cmd=[{"prefix": "config rm", "who": "osd.0", "name": "osd_memory_target"}]: dispatch
Dec 11 09:15:38 compute-0 ceph-mgr[74715]: [cephadm INFO root] Adjusting osd_memory_target on compute-1 to 127.9M
Dec 11 09:15:38 compute-0 ceph-mgr[74715]: log_channel(cephadm) log [INF] : Adjusting osd_memory_target on compute-1 to 127.9M
Dec 11 09:15:38 compute-0 ceph-mon[74426]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config set, name=osd_memory_target}] v 0)
Dec 11 09:15:38 compute-0 ceph-mgr[74715]: [cephadm WARNING cephadm.serve] Unable to set osd_memory_target on compute-1 to 134209126: error parsing value: Value '134209126' is below minimum 939524096
Dec 11 09:15:38 compute-0 ceph-mgr[74715]: log_channel(cephadm) log [WRN] : Unable to set osd_memory_target on compute-1 to 134209126: error parsing value: Value '134209126' is below minimum 939524096
Dec 11 09:15:38 compute-0 ceph-osd[82859]: log_channel(cluster) log [DBG] : 6.f deep-scrub starts
Dec 11 09:15:38 compute-0 python3[88824]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --interactive  --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   dashboard set-alertmanager-api-host http://192.168.122.100:9093 _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 11 09:15:38 compute-0 ceph-osd[82859]: log_channel(cluster) log [DBG] : 6.f deep-scrub ok
Dec 11 09:15:38 compute-0 podman[88825]: 2025-12-11 09:15:38.377579352 +0000 UTC m=+0.044053070 container create 5898c057e2da45f0920a78a232cbdee958ae97edee31764b706c6e8366171bc8 (image=quay.io/ceph/ceph:v19, name=focused_meitner, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid)
Dec 11 09:15:38 compute-0 systemd[1]: Started libpod-conmon-5898c057e2da45f0920a78a232cbdee958ae97edee31764b706c6e8366171bc8.scope.
Dec 11 09:15:38 compute-0 systemd[1]: Started libcrun container.
Dec 11 09:15:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fb9a5e68730c7fb1b784cde7a1421f6c64d6c508fda16fe484cac8fcd5a7ef9c/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec 11 09:15:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fb9a5e68730c7fb1b784cde7a1421f6c64d6c508fda16fe484cac8fcd5a7ef9c/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 11 09:15:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fb9a5e68730c7fb1b784cde7a1421f6c64d6c508fda16fe484cac8fcd5a7ef9c/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Dec 11 09:15:38 compute-0 podman[88825]: 2025-12-11 09:15:38.358485892 +0000 UTC m=+0.024959640 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Dec 11 09:15:38 compute-0 podman[88825]: 2025-12-11 09:15:38.457667765 +0000 UTC m=+0.124141503 container init 5898c057e2da45f0920a78a232cbdee958ae97edee31764b706c6e8366171bc8 (image=quay.io/ceph/ceph:v19, name=focused_meitner, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Dec 11 09:15:38 compute-0 podman[88825]: 2025-12-11 09:15:38.463835692 +0000 UTC m=+0.130309410 container start 5898c057e2da45f0920a78a232cbdee958ae97edee31764b706c6e8366171bc8 (image=quay.io/ceph/ceph:v19, name=focused_meitner, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid)
Dec 11 09:15:38 compute-0 podman[88825]: 2025-12-11 09:15:38.467234741 +0000 UTC m=+0.133708459 container attach 5898c057e2da45f0920a78a232cbdee958ae97edee31764b706c6e8366171bc8 (image=quay.io/ceph/ceph:v19, name=focused_meitner, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 11 09:15:38 compute-0 ceph-mgr[74715]: mgr.server handle_open ignoring open from osd.2 v2:192.168.122.102:6800/1051349850; not ready for session (expect reconnect)
Dec 11 09:15:38 compute-0 ceph-mon[74426]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Dec 11 09:15:38 compute-0 ceph-mon[74426]: log_channel(audit) log [DBG] : from='mgr.14313 192.168.122.100:0/1097182915' entity='mgr.compute-0.wwpcae' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Dec 11 09:15:38 compute-0 ceph-mgr[74715]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Dec 11 09:15:38 compute-0 ceph-mgr[74715]: log_channel(cluster) log [DBG] : pgmap v5: 193 pgs: 136 active+clean, 57 unknown; 449 KiB data, 54 MiB used, 40 GiB / 40 GiB avail
Dec 11 09:15:38 compute-0 ceph-mon[74426]: 5.c scrub starts
Dec 11 09:15:38 compute-0 ceph-mon[74426]: 5.c scrub ok
Dec 11 09:15:38 compute-0 ceph-mon[74426]: from='mgr.14313 192.168.122.100:0/1097182915' entity='mgr.compute-0.wwpcae' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Dec 11 09:15:38 compute-0 ceph-mon[74426]: mgrmap e17: compute-0.wwpcae(active, since 3s), standbys: compute-2.uiimcn, compute-1.unesvp
Dec 11 09:15:38 compute-0 ceph-mon[74426]: 3.f scrub starts
Dec 11 09:15:38 compute-0 ceph-mon[74426]: 3.f scrub ok
Dec 11 09:15:38 compute-0 ceph-mon[74426]: from='client.14349 -' entity='client.admin' cmd=[{"prefix": "dashboard set-grafana-api-password", "target": ["mon-mgr", ""]}]: dispatch
Dec 11 09:15:38 compute-0 ceph-mon[74426]: from='mgr.14313 192.168.122.100:0/1097182915' entity='mgr.compute-0.wwpcae' 
Dec 11 09:15:38 compute-0 ceph-mon[74426]: from='mgr.14313 192.168.122.100:0/1097182915' entity='mgr.compute-0.wwpcae' 
Dec 11 09:15:38 compute-0 ceph-mon[74426]: from='mgr.14313 192.168.122.100:0/1097182915' entity='mgr.compute-0.wwpcae' 
Dec 11 09:15:38 compute-0 ceph-mon[74426]: from='mgr.14313 192.168.122.100:0/1097182915' entity='mgr.compute-0.wwpcae' 
Dec 11 09:15:38 compute-0 ceph-mon[74426]: from='mgr.14313 192.168.122.100:0/1097182915' entity='mgr.compute-0.wwpcae' cmd=[{"prefix": "config rm", "who": "osd.1", "name": "osd_memory_target"}]: dispatch
Dec 11 09:15:38 compute-0 ceph-mon[74426]: from='mgr.14313 192.168.122.100:0/1097182915' entity='mgr.compute-0.wwpcae' 
Dec 11 09:15:38 compute-0 ceph-mon[74426]: from='mgr.14313 192.168.122.100:0/1097182915' entity='mgr.compute-0.wwpcae' 
Dec 11 09:15:38 compute-0 ceph-mon[74426]: Adjusting osd_memory_target on compute-0 to 127.9M
Dec 11 09:15:38 compute-0 ceph-mon[74426]: from='mgr.14313 192.168.122.100:0/1097182915' entity='mgr.compute-0.wwpcae' 
Dec 11 09:15:38 compute-0 ceph-mon[74426]: Unable to set osd_memory_target on compute-0 to 134203392: error parsing value: Value '134203392' is below minimum 939524096
Dec 11 09:15:38 compute-0 ceph-mon[74426]: from='mgr.14313 192.168.122.100:0/1097182915' entity='mgr.compute-0.wwpcae' cmd=[{"prefix": "config rm", "who": "osd.0", "name": "osd_memory_target"}]: dispatch
Dec 11 09:15:38 compute-0 ceph-mon[74426]: Adjusting osd_memory_target on compute-1 to 127.9M
Dec 11 09:15:38 compute-0 ceph-mon[74426]: Unable to set osd_memory_target on compute-1 to 134209126: error parsing value: Value '134209126' is below minimum 939524096
Dec 11 09:15:38 compute-0 ceph-mon[74426]: from='mgr.14313 192.168.122.100:0/1097182915' entity='mgr.compute-0.wwpcae' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Dec 11 09:15:38 compute-0 ceph-mgr[74715]: log_channel(audit) log [DBG] : from='client.14355 -' entity='client.admin' cmd=[{"prefix": "dashboard set-alertmanager-api-host", "value": "http://192.168.122.100:9093", "target": ["mon-mgr", ""]}]: dispatch
Dec 11 09:15:38 compute-0 ceph-mon[74426]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config set, name=mgr/dashboard/ALERTMANAGER_API_HOST}] v 0)
Dec 11 09:15:38 compute-0 ceph-mon[74426]: log_channel(audit) log [INF] : from='mgr.14313 192.168.122.100:0/1097182915' entity='mgr.compute-0.wwpcae' 
Dec 11 09:15:38 compute-0 focused_meitner[88842]: Option ALERTMANAGER_API_HOST updated
Dec 11 09:15:38 compute-0 systemd[1]: libpod-5898c057e2da45f0920a78a232cbdee958ae97edee31764b706c6e8366171bc8.scope: Deactivated successfully.
Dec 11 09:15:38 compute-0 podman[88825]: 2025-12-11 09:15:38.888481268 +0000 UTC m=+0.554954986 container died 5898c057e2da45f0920a78a232cbdee958ae97edee31764b706c6e8366171bc8 (image=quay.io/ceph/ceph:v19, name=focused_meitner, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec 11 09:15:38 compute-0 systemd[1]: var-lib-containers-storage-overlay-fb9a5e68730c7fb1b784cde7a1421f6c64d6c508fda16fe484cac8fcd5a7ef9c-merged.mount: Deactivated successfully.
Dec 11 09:15:38 compute-0 podman[88825]: 2025-12-11 09:15:38.929953105 +0000 UTC m=+0.596426823 container remove 5898c057e2da45f0920a78a232cbdee958ae97edee31764b706c6e8366171bc8 (image=quay.io/ceph/ceph:v19, name=focused_meitner, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=squid)
Dec 11 09:15:38 compute-0 systemd[1]: libpod-conmon-5898c057e2da45f0920a78a232cbdee958ae97edee31764b706c6e8366171bc8.scope: Deactivated successfully.
Dec 11 09:15:38 compute-0 sudo[88822]: pam_unix(sudo:session): session closed for user root
Dec 11 09:15:39 compute-0 ceph-mon[74426]: mon.compute-0@0(leader).osd e39 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 11 09:15:39 compute-0 sudo[88901]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yraevhquowqmyqdyxotuwewndintzgha ; /usr/bin/python3'
Dec 11 09:15:39 compute-0 sudo[88901]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 11 09:15:39 compute-0 ceph-mon[74426]: log_channel(cluster) log [DBG] : mgrmap e18: compute-0.wwpcae(active, since 4s), standbys: compute-2.uiimcn, compute-1.unesvp
Dec 11 09:15:39 compute-0 python3[88903]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --interactive  --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   dashboard set-prometheus-api-host http://192.168.122.100:9092 _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 11 09:15:39 compute-0 podman[88904]: 2025-12-11 09:15:39.293831747 +0000 UTC m=+0.039901218 container create f8c5e322125bac40ac158ef64ea57d9127bb466b7efefd395aacb33976a41854 (image=quay.io/ceph/ceph:v19, name=infallible_roentgen, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 11 09:15:39 compute-0 ceph-osd[82859]: log_channel(cluster) log [DBG] : 3.b scrub starts
Dec 11 09:15:39 compute-0 ceph-osd[82859]: log_channel(cluster) log [DBG] : 3.b scrub ok
Dec 11 09:15:39 compute-0 systemd[1]: Started libpod-conmon-f8c5e322125bac40ac158ef64ea57d9127bb466b7efefd395aacb33976a41854.scope.
Dec 11 09:15:39 compute-0 systemd[1]: Started libcrun container.
Dec 11 09:15:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6b6cd2b69b27fa74448fb2173433e6a1220e32869771d68c99ff00d3dd0caacf/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec 11 09:15:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6b6cd2b69b27fa74448fb2173433e6a1220e32869771d68c99ff00d3dd0caacf/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 11 09:15:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6b6cd2b69b27fa74448fb2173433e6a1220e32869771d68c99ff00d3dd0caacf/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Dec 11 09:15:39 compute-0 podman[88904]: 2025-12-11 09:15:39.367231065 +0000 UTC m=+0.113300556 container init f8c5e322125bac40ac158ef64ea57d9127bb466b7efefd395aacb33976a41854 (image=quay.io/ceph/ceph:v19, name=infallible_roentgen, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 11 09:15:39 compute-0 podman[88904]: 2025-12-11 09:15:39.372619427 +0000 UTC m=+0.118688908 container start f8c5e322125bac40ac158ef64ea57d9127bb466b7efefd395aacb33976a41854 (image=quay.io/ceph/ceph:v19, name=infallible_roentgen, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250325, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 11 09:15:39 compute-0 podman[88904]: 2025-12-11 09:15:39.277212885 +0000 UTC m=+0.023282376 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Dec 11 09:15:39 compute-0 podman[88904]: 2025-12-11 09:15:39.37519135 +0000 UTC m=+0.121260841 container attach f8c5e322125bac40ac158ef64ea57d9127bb466b7efefd395aacb33976a41854 (image=quay.io/ceph/ceph:v19, name=infallible_roentgen, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 11 09:15:39 compute-0 ceph-mon[74426]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0)
Dec 11 09:15:39 compute-0 ceph-mon[74426]: log_channel(audit) log [INF] : from='mgr.14313 192.168.122.100:0/1097182915' entity='mgr.compute-0.wwpcae' 
Dec 11 09:15:39 compute-0 ceph-mon[74426]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Dec 11 09:15:39 compute-0 ceph-mon[74426]: log_channel(audit) log [INF] : from='mgr.14313 192.168.122.100:0/1097182915' entity='mgr.compute-0.wwpcae' 
Dec 11 09:15:39 compute-0 ceph-mon[74426]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config rm", "who": "osd.2", "name": "osd_memory_target"} v 0)
Dec 11 09:15:39 compute-0 ceph-mon[74426]: log_channel(audit) log [INF] : from='mgr.14313 192.168.122.100:0/1097182915' entity='mgr.compute-0.wwpcae' cmd=[{"prefix": "config rm", "who": "osd.2", "name": "osd_memory_target"}]: dispatch
Dec 11 09:15:39 compute-0 ceph-mgr[74715]: [cephadm INFO root] Adjusting osd_memory_target on compute-2 to 127.9M
Dec 11 09:15:39 compute-0 ceph-mgr[74715]: log_channel(cephadm) log [INF] : Adjusting osd_memory_target on compute-2 to 127.9M
Dec 11 09:15:39 compute-0 ceph-mon[74426]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config set, name=osd_memory_target}] v 0)
Dec 11 09:15:39 compute-0 ceph-mgr[74715]: [cephadm WARNING cephadm.serve] Unable to set osd_memory_target on compute-2 to 134209126: error parsing value: Value '134209126' is below minimum 939524096
Dec 11 09:15:39 compute-0 ceph-mgr[74715]: log_channel(cephadm) log [WRN] : Unable to set osd_memory_target on compute-2 to 134209126: error parsing value: Value '134209126' is below minimum 939524096
Dec 11 09:15:39 compute-0 ceph-mon[74426]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 11 09:15:39 compute-0 ceph-mon[74426]: log_channel(audit) log [DBG] : from='mgr.14313 192.168.122.100:0/1097182915' entity='mgr.compute-0.wwpcae' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 11 09:15:39 compute-0 ceph-mon[74426]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Dec 11 09:15:39 compute-0 ceph-mon[74426]: log_channel(audit) log [INF] : from='mgr.14313 192.168.122.100:0/1097182915' entity='mgr.compute-0.wwpcae' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec 11 09:15:39 compute-0 ceph-mgr[74715]: [cephadm INFO cephadm.serve] Updating compute-0:/etc/ceph/ceph.conf
Dec 11 09:15:39 compute-0 ceph-mgr[74715]: log_channel(cephadm) log [INF] : Updating compute-0:/etc/ceph/ceph.conf
Dec 11 09:15:39 compute-0 ceph-mgr[74715]: [cephadm INFO cephadm.serve] Updating compute-1:/etc/ceph/ceph.conf
Dec 11 09:15:39 compute-0 ceph-mgr[74715]: [cephadm INFO cephadm.serve] Updating compute-2:/etc/ceph/ceph.conf
Dec 11 09:15:39 compute-0 ceph-mgr[74715]: log_channel(cephadm) log [INF] : Updating compute-1:/etc/ceph/ceph.conf
Dec 11 09:15:39 compute-0 ceph-mgr[74715]: log_channel(cephadm) log [INF] : Updating compute-2:/etc/ceph/ceph.conf
Dec 11 09:15:39 compute-0 ceph-mgr[74715]: mgr.server handle_open ignoring open from osd.2 v2:192.168.122.102:6800/1051349850; not ready for session (expect reconnect)
Dec 11 09:15:39 compute-0 ceph-mon[74426]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Dec 11 09:15:39 compute-0 ceph-mon[74426]: log_channel(audit) log [DBG] : from='mgr.14313 192.168.122.100:0/1097182915' entity='mgr.compute-0.wwpcae' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Dec 11 09:15:39 compute-0 ceph-mgr[74715]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Dec 11 09:15:39 compute-0 sudo[88942]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /etc/ceph
Dec 11 09:15:39 compute-0 sudo[88942]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 11 09:15:39 compute-0 sudo[88942]: pam_unix(sudo:session): session closed for user root
Dec 11 09:15:39 compute-0 ceph-mon[74426]: mon.compute-0@0(leader).osd e39 do_prune osdmap full prune enabled
Dec 11 09:15:39 compute-0 sudo[88967]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /tmp/cephadm-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060/etc/ceph
Dec 11 09:15:39 compute-0 sudo[88967]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 11 09:15:39 compute-0 sudo[88967]: pam_unix(sudo:session): session closed for user root
Dec 11 09:15:39 compute-0 ceph-mon[74426]: mon.compute-0@0(leader).osd e40 e40: 3 total, 3 up, 3 in
Dec 11 09:15:39 compute-0 ceph-mon[74426]: log_channel(cluster) log [INF] : osd.2 [v2:192.168.122.102:6800/1051349850,v1:192.168.122.102:6801/1051349850] boot
Dec 11 09:15:39 compute-0 ceph-mon[74426]: log_channel(cluster) log [DBG] : osdmap e40: 3 total, 3 up, 3 in
Dec 11 09:15:39 compute-0 ceph-mon[74426]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Dec 11 09:15:39 compute-0 ceph-mon[74426]: log_channel(audit) log [DBG] : from='mgr.14313 192.168.122.100:0/1097182915' entity='mgr.compute-0.wwpcae' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Dec 11 09:15:39 compute-0 sudo[88992]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/touch /tmp/cephadm-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060/etc/ceph/ceph.conf.new
Dec 11 09:15:39 compute-0 sudo[88992]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 11 09:15:39 compute-0 sudo[88992]: pam_unix(sudo:session): session closed for user root
Dec 11 09:15:39 compute-0 ceph-mgr[74715]: log_channel(audit) log [DBG] : from='client.14361 -' entity='client.admin' cmd=[{"prefix": "dashboard set-prometheus-api-host", "value": "http://192.168.122.100:9092", "target": ["mon-mgr", ""]}]: dispatch
Dec 11 09:15:39 compute-0 ceph-mon[74426]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config set, name=mgr/dashboard/PROMETHEUS_API_HOST}] v 0)
Dec 11 09:15:39 compute-0 ceph-mon[74426]: log_channel(audit) log [INF] : from='mgr.14313 192.168.122.100:0/1097182915' entity='mgr.compute-0.wwpcae' 
Dec 11 09:15:39 compute-0 infallible_roentgen[88919]: Option PROMETHEUS_API_HOST updated
Dec 11 09:15:39 compute-0 systemd[1]: libpod-f8c5e322125bac40ac158ef64ea57d9127bb466b7efefd395aacb33976a41854.scope: Deactivated successfully.
Dec 11 09:15:39 compute-0 podman[88904]: 2025-12-11 09:15:39.804234937 +0000 UTC m=+0.550304408 container died f8c5e322125bac40ac158ef64ea57d9127bb466b7efefd395aacb33976a41854 (image=quay.io/ceph/ceph:v19, name=infallible_roentgen, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 11 09:15:39 compute-0 sudo[89017]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R ceph-admin /tmp/cephadm-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060
Dec 11 09:15:39 compute-0 sudo[89017]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 11 09:15:39 compute-0 sudo[89017]: pam_unix(sudo:session): session closed for user root
Dec 11 09:15:39 compute-0 systemd[1]: var-lib-containers-storage-overlay-6b6cd2b69b27fa74448fb2173433e6a1220e32869771d68c99ff00d3dd0caacf-merged.mount: Deactivated successfully.
Dec 11 09:15:39 compute-0 podman[88904]: 2025-12-11 09:15:39.842288063 +0000 UTC m=+0.588357534 container remove f8c5e322125bac40ac158ef64ea57d9127bb466b7efefd395aacb33976a41854 (image=quay.io/ceph/ceph:v19, name=infallible_roentgen, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default)
Dec 11 09:15:39 compute-0 systemd[1]: libpod-conmon-f8c5e322125bac40ac158ef64ea57d9127bb466b7efefd395aacb33976a41854.scope: Deactivated successfully.
Dec 11 09:15:39 compute-0 sudo[88901]: pam_unix(sudo:session): session closed for user root
Dec 11 09:15:39 compute-0 sudo[89051]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060/etc/ceph/ceph.conf.new
Dec 11 09:15:39 compute-0 ceph-mon[74426]: 6.f deep-scrub starts
Dec 11 09:15:39 compute-0 ceph-mon[74426]: 6.f deep-scrub ok
Dec 11 09:15:39 compute-0 ceph-mon[74426]: pgmap v5: 193 pgs: 136 active+clean, 57 unknown; 449 KiB data, 54 MiB used, 40 GiB / 40 GiB avail
Dec 11 09:15:39 compute-0 ceph-mon[74426]: 4.a scrub starts
Dec 11 09:15:39 compute-0 ceph-mon[74426]: 4.a scrub ok
Dec 11 09:15:39 compute-0 ceph-mon[74426]: OSD bench result of 5714.565967 IOPS is not within the threshold limit range of 50.000000 IOPS and 500.000000 IOPS for osd.2. IOPS capacity is unchanged at 315.000000 IOPS. The recommendation is to establish the osd's IOPS capacity using other benchmark tools (e.g. Fio) and then override osd_mclock_max_capacity_iops_[hdd|ssd].
Dec 11 09:15:39 compute-0 ceph-mon[74426]: from='client.14355 -' entity='client.admin' cmd=[{"prefix": "dashboard set-alertmanager-api-host", "value": "http://192.168.122.100:9093", "target": ["mon-mgr", ""]}]: dispatch
Dec 11 09:15:39 compute-0 ceph-mon[74426]: from='mgr.14313 192.168.122.100:0/1097182915' entity='mgr.compute-0.wwpcae' 
Dec 11 09:15:39 compute-0 ceph-mon[74426]: mgrmap e18: compute-0.wwpcae(active, since 4s), standbys: compute-2.uiimcn, compute-1.unesvp
Dec 11 09:15:39 compute-0 ceph-mon[74426]: from='mgr.14313 192.168.122.100:0/1097182915' entity='mgr.compute-0.wwpcae' 
Dec 11 09:15:39 compute-0 ceph-mon[74426]: from='mgr.14313 192.168.122.100:0/1097182915' entity='mgr.compute-0.wwpcae' 
Dec 11 09:15:39 compute-0 ceph-mon[74426]: from='mgr.14313 192.168.122.100:0/1097182915' entity='mgr.compute-0.wwpcae' cmd=[{"prefix": "config rm", "who": "osd.2", "name": "osd_memory_target"}]: dispatch
Dec 11 09:15:39 compute-0 ceph-mon[74426]: from='mgr.14313 192.168.122.100:0/1097182915' entity='mgr.compute-0.wwpcae' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 11 09:15:39 compute-0 ceph-mon[74426]: from='mgr.14313 192.168.122.100:0/1097182915' entity='mgr.compute-0.wwpcae' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec 11 09:15:39 compute-0 ceph-mon[74426]: from='mgr.14313 192.168.122.100:0/1097182915' entity='mgr.compute-0.wwpcae' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Dec 11 09:15:39 compute-0 ceph-mon[74426]: osd.2 [v2:192.168.122.102:6800/1051349850,v1:192.168.122.102:6801/1051349850] boot
Dec 11 09:15:39 compute-0 ceph-mon[74426]: osdmap e40: 3 total, 3 up, 3 in
Dec 11 09:15:39 compute-0 sudo[89051]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 11 09:15:39 compute-0 ceph-mon[74426]: from='mgr.14313 192.168.122.100:0/1097182915' entity='mgr.compute-0.wwpcae' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Dec 11 09:15:39 compute-0 ceph-mon[74426]: from='mgr.14313 192.168.122.100:0/1097182915' entity='mgr.compute-0.wwpcae' 
Dec 11 09:15:39 compute-0 sudo[89051]: pam_unix(sudo:session): session closed for user root
Dec 11 09:15:39 compute-0 sudo[89102]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R 0:0 /tmp/cephadm-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060/etc/ceph/ceph.conf.new
Dec 11 09:15:39 compute-0 sudo[89102]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 11 09:15:39 compute-0 sudo[89102]: pam_unix(sudo:session): session closed for user root
Dec 11 09:15:40 compute-0 sudo[89150]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vuwtkygndqfwxjyixgdrbtbeemuadqpu ; /usr/bin/python3'
Dec 11 09:15:40 compute-0 sudo[89150]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 11 09:15:40 compute-0 sudo[89151]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060/etc/ceph/ceph.conf.new
Dec 11 09:15:40 compute-0 sudo[89151]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 11 09:15:40 compute-0 sudo[89151]: pam_unix(sudo:session): session closed for user root
Dec 11 09:15:40 compute-0 sudo[89178]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mv /tmp/cephadm-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060/etc/ceph/ceph.conf.new /etc/ceph/ceph.conf
Dec 11 09:15:40 compute-0 sudo[89178]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 11 09:15:40 compute-0 sudo[89178]: pam_unix(sudo:session): session closed for user root
Dec 11 09:15:40 compute-0 ceph-mgr[74715]: [cephadm INFO cephadm.serve] Updating compute-0:/var/lib/ceph/31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060/config/ceph.conf
Dec 11 09:15:40 compute-0 ceph-mgr[74715]: log_channel(cephadm) log [INF] : Updating compute-0:/var/lib/ceph/31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060/config/ceph.conf
Dec 11 09:15:40 compute-0 python3[89165]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --interactive  --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   dashboard set-grafana-api-url http://192.168.122.100:3100
                                           _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 11 09:15:40 compute-0 sudo[89203]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /var/lib/ceph/31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060/config
Dec 11 09:15:40 compute-0 sudo[89203]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 11 09:15:40 compute-0 sudo[89203]: pam_unix(sudo:session): session closed for user root
Dec 11 09:15:40 compute-0 ceph-mgr[74715]: [cephadm INFO cephadm.serve] Updating compute-1:/var/lib/ceph/31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060/config/ceph.conf
Dec 11 09:15:40 compute-0 ceph-mgr[74715]: log_channel(cephadm) log [INF] : Updating compute-1:/var/lib/ceph/31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060/config/ceph.conf
Dec 11 09:15:40 compute-0 podman[89226]: 2025-12-11 09:15:40.207662013 +0000 UTC m=+0.043312296 container create 499e73c9920491e5659c755ebb9240fb485af927d7901417a348047600576cac (image=quay.io/ceph/ceph:v19, name=trusting_swanson, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 11 09:15:40 compute-0 systemd[1]: Started libpod-conmon-499e73c9920491e5659c755ebb9240fb485af927d7901417a348047600576cac.scope.
Dec 11 09:15:40 compute-0 sudo[89234]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /tmp/cephadm-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060/var/lib/ceph/31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060/config
Dec 11 09:15:40 compute-0 sudo[89234]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 11 09:15:40 compute-0 ceph-mgr[74715]: [cephadm INFO cephadm.serve] Updating compute-2:/var/lib/ceph/31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060/config/ceph.conf
Dec 11 09:15:40 compute-0 ceph-mgr[74715]: log_channel(cephadm) log [INF] : Updating compute-2:/var/lib/ceph/31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060/config/ceph.conf
Dec 11 09:15:40 compute-0 sudo[89234]: pam_unix(sudo:session): session closed for user root
Dec 11 09:15:40 compute-0 systemd[1]: Started libcrun container.
Dec 11 09:15:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/90087d6b4e0827aa3d86f4ac96dd841b98e2f0e6a5b1ca26448ddd34611af38d/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 11 09:15:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/90087d6b4e0827aa3d86f4ac96dd841b98e2f0e6a5b1ca26448ddd34611af38d/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Dec 11 09:15:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/90087d6b4e0827aa3d86f4ac96dd841b98e2f0e6a5b1ca26448ddd34611af38d/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec 11 09:15:40 compute-0 podman[89226]: 2025-12-11 09:15:40.279547183 +0000 UTC m=+0.115197486 container init 499e73c9920491e5659c755ebb9240fb485af927d7901417a348047600576cac (image=quay.io/ceph/ceph:v19, name=trusting_swanson, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 11 09:15:40 compute-0 podman[89226]: 2025-12-11 09:15:40.189848543 +0000 UTC m=+0.025498836 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Dec 11 09:15:40 compute-0 podman[89226]: 2025-12-11 09:15:40.286997681 +0000 UTC m=+0.122647964 container start 499e73c9920491e5659c755ebb9240fb485af927d7901417a348047600576cac (image=quay.io/ceph/ceph:v19, name=trusting_swanson, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True)
Dec 11 09:15:40 compute-0 podman[89226]: 2025-12-11 09:15:40.29100409 +0000 UTC m=+0.126654403 container attach 499e73c9920491e5659c755ebb9240fb485af927d7901417a348047600576cac (image=quay.io/ceph/ceph:v19, name=trusting_swanson, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec 11 09:15:40 compute-0 sudo[89271]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/touch /tmp/cephadm-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060/var/lib/ceph/31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060/config/ceph.conf.new
Dec 11 09:15:40 compute-0 sudo[89271]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 11 09:15:40 compute-0 ceph-osd[82859]: log_channel(cluster) log [DBG] : 4.b scrub starts
Dec 11 09:15:40 compute-0 sudo[89271]: pam_unix(sudo:session): session closed for user root
Dec 11 09:15:40 compute-0 ceph-osd[82859]: log_channel(cluster) log [DBG] : 4.b scrub ok
Dec 11 09:15:40 compute-0 sudo[89298]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R ceph-admin /tmp/cephadm-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060
Dec 11 09:15:40 compute-0 sudo[89298]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 11 09:15:40 compute-0 sudo[89298]: pam_unix(sudo:session): session closed for user root
Dec 11 09:15:40 compute-0 sudo[89323]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060/var/lib/ceph/31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060/config/ceph.conf.new
Dec 11 09:15:40 compute-0 sudo[89323]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 11 09:15:40 compute-0 sudo[89323]: pam_unix(sudo:session): session closed for user root
Dec 11 09:15:40 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 40 pg[4.19( empty local-lis/les=28/29 n=0 ec=28/17 lis/c=28/28 les/c/f=29/29/0 sis=40 pruub=3.645566702s) [2] r=-1 lpr=40 pi=[28,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 85.281417847s@ mbc={}] PeeringState::start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 11 09:15:40 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 40 pg[6.1b( empty local-lis/les=30/31 n=0 ec=30/20 lis/c=30/30 les/c/f=31/31/0 sis=40 pruub=5.711510658s) [2] r=-1 lpr=40 pi=[30,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 87.347328186s@ mbc={}] PeeringState::start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 11 09:15:40 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 40 pg[4.1c( empty local-lis/les=28/29 n=0 ec=28/17 lis/c=28/28 les/c/f=29/29/0 sis=40 pruub=3.650418997s) [2] r=-1 lpr=40 pi=[28,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 85.286239624s@ mbc={}] PeeringState::start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 11 09:15:40 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 40 pg[3.1b( empty local-lis/les=28/29 n=0 ec=28/15 lis/c=28/28 les/c/f=29/29/0 sis=40 pruub=3.645462990s) [2] r=-1 lpr=40 pi=[28,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 85.281326294s@ mbc={}] PeeringState::start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 11 09:15:40 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 40 pg[4.19( empty local-lis/les=28/29 n=0 ec=28/17 lis/c=28/28 les/c/f=29/29/0 sis=40 pruub=3.645343781s) [2] r=-1 lpr=40 pi=[28,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 85.281417847s@ mbc={}] state<Start>: transitioning to Stray
Dec 11 09:15:40 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 40 pg[4.1c( empty local-lis/les=28/29 n=0 ec=28/17 lis/c=28/28 les/c/f=29/29/0 sis=40 pruub=3.650193691s) [2] r=-1 lpr=40 pi=[28,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 85.286239624s@ mbc={}] state<Start>: transitioning to Stray
Dec 11 09:15:40 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 40 pg[6.1b( empty local-lis/les=30/31 n=0 ec=30/20 lis/c=30/30 les/c/f=31/31/0 sis=40 pruub=5.711278439s) [2] r=-1 lpr=40 pi=[30,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 87.347328186s@ mbc={}] state<Start>: transitioning to Stray
Dec 11 09:15:40 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 40 pg[3.1b( empty local-lis/les=28/29 n=0 ec=28/15 lis/c=28/28 les/c/f=29/29/0 sis=40 pruub=3.645237684s) [2] r=-1 lpr=40 pi=[28,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 85.281326294s@ mbc={}] state<Start>: transitioning to Stray
Dec 11 09:15:40 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 40 pg[2.1b( empty local-lis/les=32/33 n=0 ec=26/13 lis/c=32/32 les/c/f=33/33/0 sis=40 pruub=1.346809387s) [2] r=-1 lpr=40 pi=[32,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 82.983154297s@ mbc={}] PeeringState::start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 11 09:15:40 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 40 pg[4.1d( empty local-lis/les=28/29 n=0 ec=28/17 lis/c=28/28 les/c/f=29/29/0 sis=40 pruub=3.644957542s) [2] r=-1 lpr=40 pi=[28,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 85.281341553s@ mbc={}] PeeringState::start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 11 09:15:40 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 40 pg[2.1b( empty local-lis/les=32/33 n=0 ec=26/13 lis/c=32/32 les/c/f=33/33/0 sis=40 pruub=1.346794605s) [2] r=-1 lpr=40 pi=[32,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 82.983154297s@ mbc={}] state<Start>: transitioning to Stray
Dec 11 09:15:40 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 40 pg[4.1d( empty local-lis/les=28/29 n=0 ec=28/17 lis/c=28/28 les/c/f=29/29/0 sis=40 pruub=3.644941330s) [2] r=-1 lpr=40 pi=[28,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 85.281341553s@ mbc={}] state<Start>: transitioning to Stray
Dec 11 09:15:40 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 40 pg[6.1( empty local-lis/les=30/31 n=0 ec=30/20 lis/c=30/30 les/c/f=31/31/0 sis=40 pruub=5.711069107s) [2] r=-1 lpr=40 pi=[30,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 87.347648621s@ mbc={}] PeeringState::start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 11 09:15:40 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 40 pg[4.3( empty local-lis/les=28/29 n=0 ec=28/17 lis/c=28/28 les/c/f=29/29/0 sis=40 pruub=3.644533157s) [2] r=-1 lpr=40 pi=[28,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 85.281112671s@ mbc={}] PeeringState::start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 11 09:15:40 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 40 pg[3.8( empty local-lis/les=28/29 n=0 ec=28/15 lis/c=28/28 les/c/f=29/29/0 sis=40 pruub=3.644629478s) [2] r=-1 lpr=40 pi=[28,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 85.281219482s@ mbc={}] PeeringState::start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 11 09:15:40 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 40 pg[3.8( empty local-lis/les=28/29 n=0 ec=28/15 lis/c=28/28 les/c/f=29/29/0 sis=40 pruub=3.644611597s) [2] r=-1 lpr=40 pi=[28,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 85.281219482s@ mbc={}] state<Start>: transitioning to Stray
Dec 11 09:15:40 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 40 pg[6.1( empty local-lis/les=30/31 n=0 ec=30/20 lis/c=30/30 les/c/f=31/31/0 sis=40 pruub=5.711056232s) [2] r=-1 lpr=40 pi=[30,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 87.347648621s@ mbc={}] state<Start>: transitioning to Stray
Dec 11 09:15:40 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 40 pg[4.3( empty local-lis/les=28/29 n=0 ec=28/17 lis/c=28/28 les/c/f=29/29/0 sis=40 pruub=3.644512653s) [2] r=-1 lpr=40 pi=[28,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 85.281112671s@ mbc={}] state<Start>: transitioning to Stray
Dec 11 09:15:40 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 40 pg[4.6( empty local-lis/les=28/29 n=0 ec=28/17 lis/c=28/28 les/c/f=29/29/0 sis=40 pruub=3.644038439s) [2] r=-1 lpr=40 pi=[28,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 85.281120300s@ mbc={}] PeeringState::start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 11 09:15:40 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 40 pg[4.6( empty local-lis/les=28/29 n=0 ec=28/17 lis/c=28/28 les/c/f=29/29/0 sis=40 pruub=3.644024611s) [2] r=-1 lpr=40 pi=[28,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 85.281120300s@ mbc={}] state<Start>: transitioning to Stray
Dec 11 09:15:40 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 40 pg[4.2( empty local-lis/les=28/29 n=0 ec=28/17 lis/c=28/28 les/c/f=29/29/0 sis=40 pruub=3.643713713s) [2] r=-1 lpr=40 pi=[28,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 85.280838013s@ mbc={}] PeeringState::start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 11 09:15:40 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 40 pg[4.2( empty local-lis/les=28/29 n=0 ec=28/17 lis/c=28/28 les/c/f=29/29/0 sis=40 pruub=3.643697977s) [2] r=-1 lpr=40 pi=[28,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 85.280838013s@ mbc={}] state<Start>: transitioning to Stray
Dec 11 09:15:40 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 40 pg[5.0( empty local-lis/les=30/31 n=0 ec=19/19 lis/c=30/30 les/c/f=31/31/0 sis=40 pruub=5.710855007s) [2] r=-1 lpr=40 pi=[30,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 87.348152161s@ mbc={}] PeeringState::start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 11 09:15:40 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 40 pg[5.0( empty local-lis/les=30/31 n=0 ec=19/19 lis/c=30/30 les/c/f=31/31/0 sis=40 pruub=5.710842133s) [2] r=-1 lpr=40 pi=[30,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 87.348152161s@ mbc={}] state<Start>: transitioning to Stray
Dec 11 09:15:40 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 40 pg[3.0( empty local-lis/les=28/29 n=0 ec=15/15 lis/c=28/28 les/c/f=29/29/0 sis=40 pruub=3.643076420s) [2] r=-1 lpr=40 pi=[28,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 85.280487061s@ mbc={}] PeeringState::start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 11 09:15:40 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 40 pg[3.0( empty local-lis/les=28/29 n=0 ec=15/15 lis/c=28/28 les/c/f=29/29/0 sis=40 pruub=3.643064260s) [2] r=-1 lpr=40 pi=[28,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 85.280487061s@ mbc={}] state<Start>: transitioning to Stray
Dec 11 09:15:40 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 40 pg[2.a( empty local-lis/les=32/33 n=0 ec=26/13 lis/c=32/32 les/c/f=33/33/0 sis=40 pruub=1.345902324s) [2] r=-1 lpr=40 pi=[32,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 82.983421326s@ mbc={}] PeeringState::start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 11 09:15:40 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 40 pg[5.d( empty local-lis/les=30/31 n=0 ec=30/19 lis/c=30/30 les/c/f=31/31/0 sis=40 pruub=5.711459160s) [2] r=-1 lpr=40 pi=[30,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 87.348991394s@ mbc={}] PeeringState::start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 11 09:15:40 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 40 pg[5.d( empty local-lis/les=30/31 n=0 ec=30/19 lis/c=30/30 les/c/f=31/31/0 sis=40 pruub=5.711448193s) [2] r=-1 lpr=40 pi=[30,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 87.348991394s@ mbc={}] state<Start>: transitioning to Stray
Dec 11 09:15:40 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 40 pg[2.a( empty local-lis/les=32/33 n=0 ec=26/13 lis/c=32/32 les/c/f=33/33/0 sis=40 pruub=1.345891714s) [2] r=-1 lpr=40 pi=[32,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 82.983421326s@ mbc={}] state<Start>: transitioning to Stray
Dec 11 09:15:40 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 40 pg[2.c( empty local-lis/les=32/33 n=0 ec=26/13 lis/c=32/32 les/c/f=33/33/0 sis=40 pruub=1.345548153s) [2] r=-1 lpr=40 pi=[32,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 82.983207703s@ mbc={}] PeeringState::start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 11 09:15:40 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 40 pg[2.d( empty local-lis/les=32/33 n=0 ec=26/13 lis/c=32/32 les/c/f=33/33/0 sis=40 pruub=1.345549822s) [2] r=-1 lpr=40 pi=[32,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 82.983200073s@ mbc={}] PeeringState::start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 11 09:15:40 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 40 pg[2.c( empty local-lis/les=32/33 n=0 ec=26/13 lis/c=32/32 les/c/f=33/33/0 sis=40 pruub=1.345538378s) [2] r=-1 lpr=40 pi=[32,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 82.983207703s@ mbc={}] state<Start>: transitioning to Stray
Dec 11 09:15:40 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 40 pg[5.b( empty local-lis/les=30/31 n=0 ec=30/19 lis/c=30/30 les/c/f=31/31/0 sis=40 pruub=5.711778641s) [2] r=-1 lpr=40 pi=[30,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 87.349487305s@ mbc={}] PeeringState::start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 11 09:15:40 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 40 pg[2.d( empty local-lis/les=32/33 n=0 ec=26/13 lis/c=32/32 les/c/f=33/33/0 sis=40 pruub=1.345516682s) [2] r=-1 lpr=40 pi=[32,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 82.983200073s@ mbc={}] state<Start>: transitioning to Stray
Dec 11 09:15:40 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 40 pg[5.b( empty local-lis/les=30/31 n=0 ec=30/19 lis/c=30/30 les/c/f=31/31/0 sis=40 pruub=5.711763382s) [2] r=-1 lpr=40 pi=[30,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 87.349487305s@ mbc={}] state<Start>: transitioning to Stray
Dec 11 09:15:40 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 40 pg[7.a( empty local-lis/les=35/36 n=0 ec=32/21 lis/c=35/35 les/c/f=36/36/0 sis=40 pruub=2.162859678s) [2] r=-1 lpr=40 pi=[35,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 83.800643921s@ mbc={}] PeeringState::start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 11 09:15:40 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 40 pg[5.8( empty local-lis/les=30/31 n=0 ec=30/19 lis/c=30/30 les/c/f=31/31/0 sis=40 pruub=5.711378574s) [2] r=-1 lpr=40 pi=[30,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 87.349174500s@ mbc={}] PeeringState::start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 11 09:15:40 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 40 pg[7.a( empty local-lis/les=35/36 n=0 ec=32/21 lis/c=35/35 les/c/f=36/36/0 sis=40 pruub=2.162848234s) [2] r=-1 lpr=40 pi=[35,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 83.800643921s@ mbc={}] state<Start>: transitioning to Stray
Dec 11 09:15:40 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 40 pg[5.8( empty local-lis/les=30/31 n=0 ec=30/19 lis/c=30/30 les/c/f=31/31/0 sis=40 pruub=5.711367130s) [2] r=-1 lpr=40 pi=[30,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 87.349174500s@ mbc={}] state<Start>: transitioning to Stray
Dec 11 09:15:40 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 40 pg[7.14( empty local-lis/les=35/36 n=0 ec=32/21 lis/c=35/35 les/c/f=36/36/0 sis=40 pruub=2.162765503s) [2] r=-1 lpr=40 pi=[35,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 83.800666809s@ mbc={}] PeeringState::start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 11 09:15:40 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 40 pg[7.14( empty local-lis/les=35/36 n=0 ec=32/21 lis/c=35/35 les/c/f=36/36/0 sis=40 pruub=2.162755013s) [2] r=-1 lpr=40 pi=[35,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 83.800666809s@ mbc={}] state<Start>: transitioning to Stray
Dec 11 09:15:40 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 40 pg[2.10( empty local-lis/les=32/33 n=0 ec=26/13 lis/c=32/32 les/c/f=33/33/0 sis=40 pruub=1.345276713s) [2] r=-1 lpr=40 pi=[32,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 82.983428955s@ mbc={}] PeeringState::start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 11 09:15:40 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 40 pg[2.10( empty local-lis/les=32/33 n=0 ec=26/13 lis/c=32/32 les/c/f=33/33/0 sis=40 pruub=1.345265031s) [2] r=-1 lpr=40 pi=[32,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 82.983428955s@ mbc={}] state<Start>: transitioning to Stray
Dec 11 09:15:40 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 40 pg[4.14( empty local-lis/les=28/29 n=0 ec=28/17 lis/c=28/28 les/c/f=29/29/0 sis=40 pruub=3.641518831s) [2] r=-1 lpr=40 pi=[28,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 85.279830933s@ mbc={}] PeeringState::start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 11 09:15:40 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 40 pg[2.15( empty local-lis/les=32/33 n=0 ec=26/13 lis/c=32/32 les/c/f=33/33/0 sis=40 pruub=1.344725013s) [2] r=-1 lpr=40 pi=[32,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 82.983070374s@ mbc={}] PeeringState::start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 11 09:15:40 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 40 pg[2.13( empty local-lis/les=32/33 n=0 ec=26/13 lis/c=32/32 les/c/f=33/33/0 sis=40 pruub=1.345091105s) [2] r=-1 lpr=40 pi=[32,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 82.983428955s@ mbc={}] PeeringState::start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 11 09:15:40 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 40 pg[4.14( empty local-lis/les=28/29 n=0 ec=28/17 lis/c=28/28 les/c/f=29/29/0 sis=40 pruub=3.641490698s) [2] r=-1 lpr=40 pi=[28,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 85.279830933s@ mbc={}] state<Start>: transitioning to Stray
Dec 11 09:15:40 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 40 pg[2.15( empty local-lis/les=32/33 n=0 ec=26/13 lis/c=32/32 les/c/f=33/33/0 sis=40 pruub=1.344710827s) [2] r=-1 lpr=40 pi=[32,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 82.983070374s@ mbc={}] state<Start>: transitioning to Stray
Dec 11 09:15:40 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 40 pg[5.12( empty local-lis/les=30/31 n=0 ec=30/19 lis/c=30/30 les/c/f=31/31/0 sis=40 pruub=5.712788582s) [2] r=-1 lpr=40 pi=[30,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 87.351211548s@ mbc={}] PeeringState::start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 11 09:15:40 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 40 pg[5.12( empty local-lis/les=30/31 n=0 ec=30/19 lis/c=30/30 les/c/f=31/31/0 sis=40 pruub=5.712744236s) [2] r=-1 lpr=40 pi=[30,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 87.351211548s@ mbc={}] state<Start>: transitioning to Stray
Dec 11 09:15:40 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 40 pg[5.13( empty local-lis/les=30/31 n=0 ec=30/19 lis/c=30/30 les/c/f=31/31/0 sis=40 pruub=5.712861538s) [2] r=-1 lpr=40 pi=[30,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 87.351348877s@ mbc={}] PeeringState::start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 11 09:15:40 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 40 pg[2.13( empty local-lis/les=32/33 n=0 ec=26/13 lis/c=32/32 les/c/f=33/33/0 sis=40 pruub=1.345050097s) [2] r=-1 lpr=40 pi=[32,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 82.983428955s@ mbc={}] state<Start>: transitioning to Stray
Dec 11 09:15:40 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 40 pg[5.13( empty local-lis/les=30/31 n=0 ec=30/19 lis/c=30/30 les/c/f=31/31/0 sis=40 pruub=5.712848186s) [2] r=-1 lpr=40 pi=[30,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 87.351348877s@ mbc={}] state<Start>: transitioning to Stray
Dec 11 09:15:40 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 40 pg[7.1d( empty local-lis/les=35/36 n=0 ec=32/21 lis/c=35/35 les/c/f=36/36/0 sis=40 pruub=2.165278912s) [2] r=-1 lpr=40 pi=[35,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 83.803916931s@ mbc={}] PeeringState::start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 11 09:15:40 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 40 pg[7.1d( empty local-lis/les=35/36 n=0 ec=32/21 lis/c=35/35 les/c/f=36/36/0 sis=40 pruub=2.165266991s) [2] r=-1 lpr=40 pi=[35,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 83.803916931s@ mbc={}] state<Start>: transitioning to Stray
Dec 11 09:15:40 compute-0 sudo[89390]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R 0:0 /tmp/cephadm-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060/var/lib/ceph/31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060/config/ceph.conf.new
Dec 11 09:15:40 compute-0 sudo[89390]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 11 09:15:40 compute-0 sudo[89390]: pam_unix(sudo:session): session closed for user root
Dec 11 09:15:40 compute-0 sudo[89415]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060/var/lib/ceph/31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060/config/ceph.conf.new
Dec 11 09:15:40 compute-0 sudo[89415]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 11 09:15:40 compute-0 sudo[89415]: pam_unix(sudo:session): session closed for user root
Dec 11 09:15:40 compute-0 ceph-mgr[74715]: log_channel(cluster) log [DBG] : pgmap v7: 193 pgs: 57 peering, 136 active+clean; 449 KiB data, 480 MiB used, 60 GiB / 60 GiB avail
Dec 11 09:15:40 compute-0 sudo[89440]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mv /tmp/cephadm-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060/var/lib/ceph/31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060/config/ceph.conf.new /var/lib/ceph/31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060/config/ceph.conf
Dec 11 09:15:40 compute-0 sudo[89440]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 11 09:15:40 compute-0 ceph-mgr[74715]: log_channel(audit) log [DBG] : from='client.14367 -' entity='client.admin' cmd=[{"prefix": "dashboard set-grafana-api-url", "value": "http://192.168.122.100:3100", "target": ["mon-mgr", ""]}]: dispatch
Dec 11 09:15:40 compute-0 ceph-mon[74426]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config set, name=mgr/dashboard/GRAFANA_API_URL}] v 0)
Dec 11 09:15:40 compute-0 sudo[89440]: pam_unix(sudo:session): session closed for user root
Dec 11 09:15:40 compute-0 ceph-mgr[74715]: [cephadm INFO cephadm.serve] Updating compute-0:/etc/ceph/ceph.client.admin.keyring
Dec 11 09:15:40 compute-0 ceph-mgr[74715]: log_channel(cephadm) log [INF] : Updating compute-0:/etc/ceph/ceph.client.admin.keyring
Dec 11 09:15:40 compute-0 ceph-mon[74426]: log_channel(audit) log [INF] : from='mgr.14313 192.168.122.100:0/1097182915' entity='mgr.compute-0.wwpcae' 
Dec 11 09:15:40 compute-0 trusting_swanson[89267]: Option GRAFANA_API_URL updated
Dec 11 09:15:40 compute-0 systemd[1]: libpod-499e73c9920491e5659c755ebb9240fb485af927d7901417a348047600576cac.scope: Deactivated successfully.
Dec 11 09:15:40 compute-0 ceph-mgr[74715]: [cephadm INFO cephadm.serve] Updating compute-1:/etc/ceph/ceph.client.admin.keyring
Dec 11 09:15:40 compute-0 ceph-mgr[74715]: log_channel(cephadm) log [INF] : Updating compute-1:/etc/ceph/ceph.client.admin.keyring
Dec 11 09:15:40 compute-0 podman[89226]: 2025-12-11 09:15:40.731226863 +0000 UTC m=+0.566877156 container died 499e73c9920491e5659c755ebb9240fb485af927d7901417a348047600576cac (image=quay.io/ceph/ceph:v19, name=trusting_swanson, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, io.buildah.version=1.40.1)
Dec 11 09:15:40 compute-0 ceph-mon[74426]: mon.compute-0@0(leader).osd e40 do_prune osdmap full prune enabled
Dec 11 09:15:40 compute-0 ceph-mon[74426]: mon.compute-0@0(leader).osd e41 e41: 3 total, 3 up, 3 in
Dec 11 09:15:40 compute-0 ceph-mon[74426]: log_channel(cluster) log [DBG] : osdmap e41: 3 total, 3 up, 3 in
Dec 11 09:15:40 compute-0 systemd[1]: var-lib-containers-storage-overlay-90087d6b4e0827aa3d86f4ac96dd841b98e2f0e6a5b1ca26448ddd34611af38d-merged.mount: Deactivated successfully.
Dec 11 09:15:40 compute-0 ceph-mgr[74715]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 11 09:15:40 compute-0 sudo[89466]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /etc/ceph
Dec 11 09:15:40 compute-0 sudo[89466]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 11 09:15:40 compute-0 podman[89226]: 2025-12-11 09:15:40.775425078 +0000 UTC m=+0.611075351 container remove 499e73c9920491e5659c755ebb9240fb485af927d7901417a348047600576cac (image=quay.io/ceph/ceph:v19, name=trusting_swanson, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Dec 11 09:15:40 compute-0 ceph-mgr[74715]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 11 09:15:40 compute-0 sudo[89466]: pam_unix(sudo:session): session closed for user root
Dec 11 09:15:40 compute-0 ceph-mgr[74715]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 11 09:15:40 compute-0 ceph-mgr[74715]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: starting
Dec 11 09:15:40 compute-0 ceph-mgr[74715]: [rbd_support INFO root] PerfHandler: starting
Dec 11 09:15:40 compute-0 ceph-mgr[74715]: [rbd_support INFO root] load_task_task: vms, start_after=
Dec 11 09:15:40 compute-0 systemd[1]: libpod-conmon-499e73c9920491e5659c755ebb9240fb485af927d7901417a348047600576cac.scope: Deactivated successfully.
Dec 11 09:15:40 compute-0 ceph-mgr[74715]: [rbd_support INFO root] load_task_task: volumes, start_after=
Dec 11 09:15:40 compute-0 sudo[89150]: pam_unix(sudo:session): session closed for user root
Dec 11 09:15:40 compute-0 ceph-mgr[74715]: [rbd_support INFO root] load_task_task: backups, start_after=
Dec 11 09:15:40 compute-0 ceph-mgr[74715]: [rbd_support INFO root] load_task_task: images, start_after=
Dec 11 09:15:40 compute-0 ceph-mgr[74715]: [rbd_support INFO root] TaskHandler: starting
Dec 11 09:15:40 compute-0 ceph-mon[74426]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.wwpcae/trash_purge_schedule"} v 0)
Dec 11 09:15:40 compute-0 ceph-mon[74426]: log_channel(audit) log [INF] : from='mgr.14313 192.168.122.100:0/1097182915' entity='mgr.compute-0.wwpcae' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.wwpcae/trash_purge_schedule"}]: dispatch
Dec 11 09:15:40 compute-0 ceph-mgr[74715]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec 11 09:15:40 compute-0 ceph-mgr[74715]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 11 09:15:40 compute-0 ceph-mgr[74715]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 11 09:15:40 compute-0 ceph-mgr[74715]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 11 09:15:40 compute-0 ceph-mgr[74715]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 11 09:15:40 compute-0 ceph-mgr[74715]: [rbd_support INFO root] TrashPurgeScheduleHandler: starting
Dec 11 09:15:40 compute-0 ceph-mgr[74715]: [rbd_support INFO root] setup complete
Dec 11 09:15:40 compute-0 sudo[89504]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /tmp/cephadm-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060/etc/ceph
Dec 11 09:15:40 compute-0 sudo[89504]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 11 09:15:40 compute-0 sudo[89504]: pam_unix(sudo:session): session closed for user root
Dec 11 09:15:40 compute-0 ceph-mon[74426]: 3.b scrub starts
Dec 11 09:15:40 compute-0 ceph-mon[74426]: 3.b scrub ok
Dec 11 09:15:40 compute-0 ceph-mon[74426]: Adjusting osd_memory_target on compute-2 to 127.9M
Dec 11 09:15:40 compute-0 ceph-mon[74426]: Unable to set osd_memory_target on compute-2 to 134209126: error parsing value: Value '134209126' is below minimum 939524096
Dec 11 09:15:40 compute-0 ceph-mon[74426]: Updating compute-0:/etc/ceph/ceph.conf
Dec 11 09:15:40 compute-0 ceph-mon[74426]: Updating compute-1:/etc/ceph/ceph.conf
Dec 11 09:15:40 compute-0 ceph-mon[74426]: Updating compute-2:/etc/ceph/ceph.conf
Dec 11 09:15:40 compute-0 ceph-mon[74426]: 3.d scrub starts
Dec 11 09:15:40 compute-0 ceph-mon[74426]: 3.d scrub ok
Dec 11 09:15:40 compute-0 ceph-mon[74426]: from='client.14361 -' entity='client.admin' cmd=[{"prefix": "dashboard set-prometheus-api-host", "value": "http://192.168.122.100:9092", "target": ["mon-mgr", ""]}]: dispatch
Dec 11 09:15:40 compute-0 ceph-mon[74426]: Updating compute-0:/var/lib/ceph/31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060/config/ceph.conf
Dec 11 09:15:40 compute-0 ceph-mon[74426]: Updating compute-1:/var/lib/ceph/31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060/config/ceph.conf
Dec 11 09:15:40 compute-0 ceph-mon[74426]: Updating compute-2:/var/lib/ceph/31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060/config/ceph.conf
Dec 11 09:15:40 compute-0 ceph-mon[74426]: from='mgr.14313 192.168.122.100:0/1097182915' entity='mgr.compute-0.wwpcae' 
Dec 11 09:15:40 compute-0 ceph-mon[74426]: osdmap e41: 3 total, 3 up, 3 in
Dec 11 09:15:40 compute-0 ceph-mon[74426]: from='mgr.14313 192.168.122.100:0/1097182915' entity='mgr.compute-0.wwpcae' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.wwpcae/trash_purge_schedule"}]: dispatch
Dec 11 09:15:40 compute-0 sudo[89531]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/touch /tmp/cephadm-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060/etc/ceph/ceph.client.admin.keyring.new
Dec 11 09:15:40 compute-0 sudo[89531]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 11 09:15:40 compute-0 sudo[89531]: pam_unix(sudo:session): session closed for user root
Dec 11 09:15:40 compute-0 ceph-mgr[74715]: [cephadm INFO cephadm.serve] Updating compute-2:/etc/ceph/ceph.client.admin.keyring
Dec 11 09:15:40 compute-0 ceph-mgr[74715]: log_channel(cephadm) log [INF] : Updating compute-2:/etc/ceph/ceph.client.admin.keyring
Dec 11 09:15:40 compute-0 sudo[89582]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qjalbqjfrbrjneejsixglrzwffmdvpaw ; /usr/bin/python3'
Dec 11 09:15:40 compute-0 sudo[89582]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 11 09:15:40 compute-0 sudo[89579]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R ceph-admin /tmp/cephadm-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060
Dec 11 09:15:40 compute-0 sudo[89579]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 11 09:15:40 compute-0 sudo[89579]: pam_unix(sudo:session): session closed for user root
Dec 11 09:15:41 compute-0 sudo[89607]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060/etc/ceph/ceph.client.admin.keyring.new
Dec 11 09:15:41 compute-0 sudo[89607]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 11 09:15:41 compute-0 sudo[89607]: pam_unix(sudo:session): session closed for user root
Dec 11 09:15:41 compute-0 python3[89602]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --interactive  --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   mgr module disable dashboard _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 11 09:15:41 compute-0 sudo[89655]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R 0:0 /tmp/cephadm-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060/etc/ceph/ceph.client.admin.keyring.new
Dec 11 09:15:41 compute-0 sudo[89655]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 11 09:15:41 compute-0 sudo[89655]: pam_unix(sudo:session): session closed for user root
Dec 11 09:15:41 compute-0 podman[89676]: 2025-12-11 09:15:41.176963954 +0000 UTC m=+0.045451045 container create a489b740379577465a327abbae4653c9ad418ef2f9d4f7bfc7f1075ce099eaa1 (image=quay.io/ceph/ceph:v19, name=goofy_robinson, CEPH_REF=squid, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 11 09:15:41 compute-0 systemd[1]: Started libpod-conmon-a489b740379577465a327abbae4653c9ad418ef2f9d4f7bfc7f1075ce099eaa1.scope.
Dec 11 09:15:41 compute-0 systemd[1]: Started libcrun container.
Dec 11 09:15:41 compute-0 sudo[89693]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 600 /tmp/cephadm-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060/etc/ceph/ceph.client.admin.keyring.new
Dec 11 09:15:41 compute-0 sudo[89693]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 11 09:15:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/eb9411a3acdf5ff237462dbf739d1421ff24631ce39d5d12b4c05a8fd49c1e40/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 11 09:15:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/eb9411a3acdf5ff237462dbf739d1421ff24631ce39d5d12b4c05a8fd49c1e40/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Dec 11 09:15:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/eb9411a3acdf5ff237462dbf739d1421ff24631ce39d5d12b4c05a8fd49c1e40/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec 11 09:15:41 compute-0 sudo[89693]: pam_unix(sudo:session): session closed for user root
Dec 11 09:15:41 compute-0 podman[89676]: 2025-12-11 09:15:41.241086206 +0000 UTC m=+0.109573327 container init a489b740379577465a327abbae4653c9ad418ef2f9d4f7bfc7f1075ce099eaa1 (image=quay.io/ceph/ceph:v19, name=goofy_robinson, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default)
Dec 11 09:15:41 compute-0 podman[89676]: 2025-12-11 09:15:41.246694755 +0000 UTC m=+0.115181856 container start a489b740379577465a327abbae4653c9ad418ef2f9d4f7bfc7f1075ce099eaa1 (image=quay.io/ceph/ceph:v19, name=goofy_robinson, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec 11 09:15:41 compute-0 podman[89676]: 2025-12-11 09:15:41.251599703 +0000 UTC m=+0.120086814 container attach a489b740379577465a327abbae4653c9ad418ef2f9d4f7bfc7f1075ce099eaa1 (image=quay.io/ceph/ceph:v19, name=goofy_robinson, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 11 09:15:41 compute-0 podman[89676]: 2025-12-11 09:15:41.156651044 +0000 UTC m=+0.025138175 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Dec 11 09:15:41 compute-0 sudo[89724]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mv /tmp/cephadm-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060/etc/ceph/ceph.client.admin.keyring.new /etc/ceph/ceph.client.admin.keyring
Dec 11 09:15:41 compute-0 sudo[89724]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 11 09:15:41 compute-0 sudo[89724]: pam_unix(sudo:session): session closed for user root
Dec 11 09:15:41 compute-0 ceph-mgr[74715]: [cephadm INFO cephadm.serve] Updating compute-0:/var/lib/ceph/31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060/config/ceph.client.admin.keyring
Dec 11 09:15:41 compute-0 ceph-mgr[74715]: log_channel(cephadm) log [INF] : Updating compute-0:/var/lib/ceph/31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060/config/ceph.client.admin.keyring
Dec 11 09:15:41 compute-0 ceph-mgr[74715]: [cephadm INFO cephadm.serve] Updating compute-1:/var/lib/ceph/31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060/config/ceph.client.admin.keyring
Dec 11 09:15:41 compute-0 ceph-mgr[74715]: log_channel(cephadm) log [INF] : Updating compute-1:/var/lib/ceph/31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060/config/ceph.client.admin.keyring
Dec 11 09:15:41 compute-0 ceph-osd[82859]: log_channel(cluster) log [DBG] : 5.a scrub starts
Dec 11 09:15:41 compute-0 sudo[89750]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /var/lib/ceph/31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060/config
Dec 11 09:15:41 compute-0 sudo[89750]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 11 09:15:41 compute-0 ceph-osd[82859]: log_channel(cluster) log [DBG] : 5.a scrub ok
Dec 11 09:15:41 compute-0 sudo[89750]: pam_unix(sudo:session): session closed for user root
Dec 11 09:15:41 compute-0 sudo[89794]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /tmp/cephadm-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060/var/lib/ceph/31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060/config
Dec 11 09:15:41 compute-0 sudo[89794]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 11 09:15:41 compute-0 sudo[89794]: pam_unix(sudo:session): session closed for user root
Dec 11 09:15:41 compute-0 sudo[89819]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/touch /tmp/cephadm-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060/var/lib/ceph/31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060/config/ceph.client.admin.keyring.new
Dec 11 09:15:41 compute-0 sudo[89819]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 11 09:15:41 compute-0 sudo[89819]: pam_unix(sudo:session): session closed for user root
Dec 11 09:15:41 compute-0 sudo[89844]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R ceph-admin /tmp/cephadm-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060
Dec 11 09:15:41 compute-0 sudo[89844]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 11 09:15:41 compute-0 sudo[89844]: pam_unix(sudo:session): session closed for user root
Dec 11 09:15:41 compute-0 ceph-mgr[74715]: [cephadm INFO cephadm.serve] Updating compute-2:/var/lib/ceph/31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060/config/ceph.client.admin.keyring
Dec 11 09:15:41 compute-0 ceph-mgr[74715]: log_channel(cephadm) log [INF] : Updating compute-2:/var/lib/ceph/31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060/config/ceph.client.admin.keyring
Dec 11 09:15:41 compute-0 sudo[89869]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060/var/lib/ceph/31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060/config/ceph.client.admin.keyring.new
Dec 11 09:15:41 compute-0 sudo[89869]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 11 09:15:41 compute-0 sudo[89869]: pam_unix(sudo:session): session closed for user root
Dec 11 09:15:41 compute-0 ceph-mon[74426]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr module disable", "module": "dashboard"} v 0)
Dec 11 09:15:41 compute-0 ceph-mon[74426]: log_channel(audit) log [INF] : from='client.? ' entity='client.admin' cmd=[{"prefix": "mgr module disable", "module": "dashboard"}]: dispatch
Dec 11 09:15:41 compute-0 sudo[89918]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R 0:0 /tmp/cephadm-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060/var/lib/ceph/31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060/config/ceph.client.admin.keyring.new
Dec 11 09:15:41 compute-0 sudo[89918]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 11 09:15:41 compute-0 sudo[89918]: pam_unix(sudo:session): session closed for user root
Dec 11 09:15:41 compute-0 sudo[89943]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 600 /tmp/cephadm-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060/var/lib/ceph/31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060/config/ceph.client.admin.keyring.new
Dec 11 09:15:41 compute-0 sudo[89943]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 11 09:15:41 compute-0 sudo[89943]: pam_unix(sudo:session): session closed for user root
Dec 11 09:15:41 compute-0 sudo[89968]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mv /tmp/cephadm-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060/var/lib/ceph/31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060/config/ceph.client.admin.keyring.new /var/lib/ceph/31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060/config/ceph.client.admin.keyring
Dec 11 09:15:41 compute-0 sudo[89968]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 11 09:15:41 compute-0 sudo[89968]: pam_unix(sudo:session): session closed for user root
Dec 11 09:15:41 compute-0 ceph-mon[74426]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec 11 09:15:41 compute-0 ceph-mon[74426]: log_channel(audit) log [INF] : from='mgr.14313 192.168.122.100:0/1097182915' entity='mgr.compute-0.wwpcae' 
Dec 11 09:15:41 compute-0 ceph-mon[74426]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec 11 09:15:41 compute-0 ceph-mon[74426]: log_channel(audit) log [INF] : from='mgr.14313 192.168.122.100:0/1097182915' entity='mgr.compute-0.wwpcae' 
Dec 11 09:15:41 compute-0 ceph-mon[74426]: 4.b scrub starts
Dec 11 09:15:41 compute-0 ceph-mon[74426]: 4.b scrub ok
Dec 11 09:15:41 compute-0 ceph-mon[74426]: pgmap v7: 193 pgs: 57 peering, 136 active+clean; 449 KiB data, 480 MiB used, 60 GiB / 60 GiB avail
Dec 11 09:15:41 compute-0 ceph-mon[74426]: 4.d scrub starts
Dec 11 09:15:41 compute-0 ceph-mon[74426]: 4.d scrub ok
Dec 11 09:15:41 compute-0 ceph-mon[74426]: from='client.14367 -' entity='client.admin' cmd=[{"prefix": "dashboard set-grafana-api-url", "value": "http://192.168.122.100:3100", "target": ["mon-mgr", ""]}]: dispatch
Dec 11 09:15:41 compute-0 ceph-mon[74426]: Updating compute-0:/etc/ceph/ceph.client.admin.keyring
Dec 11 09:15:41 compute-0 ceph-mon[74426]: Updating compute-1:/etc/ceph/ceph.client.admin.keyring
Dec 11 09:15:41 compute-0 ceph-mon[74426]: Updating compute-2:/etc/ceph/ceph.client.admin.keyring
Dec 11 09:15:41 compute-0 ceph-mon[74426]: 3.9 scrub starts
Dec 11 09:15:41 compute-0 ceph-mon[74426]: 3.9 scrub ok
Dec 11 09:15:41 compute-0 ceph-mon[74426]: Updating compute-0:/var/lib/ceph/31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060/config/ceph.client.admin.keyring
Dec 11 09:15:41 compute-0 ceph-mon[74426]: Updating compute-1:/var/lib/ceph/31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060/config/ceph.client.admin.keyring
Dec 11 09:15:41 compute-0 ceph-mon[74426]: from='client.? 192.168.122.100:0/2447536963' entity='client.admin' cmd=[{"prefix": "mgr module disable", "module": "dashboard"}]: dispatch
Dec 11 09:15:41 compute-0 ceph-mon[74426]: from='client.? ' entity='client.admin' cmd=[{"prefix": "mgr module disable", "module": "dashboard"}]: dispatch
Dec 11 09:15:41 compute-0 ceph-mon[74426]: from='mgr.14313 192.168.122.100:0/1097182915' entity='mgr.compute-0.wwpcae' 
Dec 11 09:15:41 compute-0 ceph-mon[74426]: from='mgr.14313 192.168.122.100:0/1097182915' entity='mgr.compute-0.wwpcae' 
Dec 11 09:15:41 compute-0 ceph-mon[74426]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Dec 11 09:15:41 compute-0 ceph-mon[74426]: log_channel(audit) log [INF] : from='client.? ' entity='client.admin' cmd='[{"prefix": "mgr module disable", "module": "dashboard"}]': finished
Dec 11 09:15:41 compute-0 ceph-mgr[74715]: mgr handle_mgr_map respawning because set of enabled modules changed!
Dec 11 09:15:41 compute-0 ceph-mgr[74715]: mgr respawn  e: '/usr/bin/ceph-mgr'
Dec 11 09:15:41 compute-0 ceph-mgr[74715]: mgr respawn  0: '/usr/bin/ceph-mgr'
Dec 11 09:15:41 compute-0 ceph-mgr[74715]: mgr respawn  1: '-n'
Dec 11 09:15:41 compute-0 ceph-mgr[74715]: mgr respawn  2: 'mgr.compute-0.wwpcae'
Dec 11 09:15:41 compute-0 ceph-mgr[74715]: mgr respawn  3: '-f'
Dec 11 09:15:41 compute-0 ceph-mgr[74715]: mgr respawn  4: '--setuser'
Dec 11 09:15:41 compute-0 ceph-mgr[74715]: mgr respawn  5: 'ceph'
Dec 11 09:15:41 compute-0 ceph-mgr[74715]: mgr respawn  6: '--setgroup'
Dec 11 09:15:41 compute-0 ceph-mgr[74715]: mgr respawn  7: 'ceph'
Dec 11 09:15:41 compute-0 ceph-mgr[74715]: mgr respawn  8: '--default-log-to-file=false'
Dec 11 09:15:41 compute-0 ceph-mgr[74715]: mgr respawn  9: '--default-log-to-journald=true'
Dec 11 09:15:41 compute-0 ceph-mgr[74715]: mgr respawn  10: '--default-log-to-stderr=false'
Dec 11 09:15:41 compute-0 ceph-mgr[74715]: mgr respawn respawning with exe /usr/bin/ceph-mgr
Dec 11 09:15:41 compute-0 ceph-mgr[74715]: mgr respawn  exe_path /proc/self/exe
Dec 11 09:15:41 compute-0 ceph-mon[74426]: log_channel(cluster) log [DBG] : mgrmap e19: compute-0.wwpcae(active, since 7s), standbys: compute-2.uiimcn, compute-1.unesvp
Dec 11 09:15:41 compute-0 ceph-mon[74426]: log_channel(audit) log [INF] : from='mgr.14313 192.168.122.100:0/1097182915' entity='mgr.compute-0.wwpcae' 
Dec 11 09:15:41 compute-0 systemd[1]: libpod-a489b740379577465a327abbae4653c9ad418ef2f9d4f7bfc7f1075ce099eaa1.scope: Deactivated successfully.
Dec 11 09:15:41 compute-0 conmon[89719]: conmon a489b740379577465a32 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-a489b740379577465a327abbae4653c9ad418ef2f9d4f7bfc7f1075ce099eaa1.scope/container/memory.events
Dec 11 09:15:41 compute-0 podman[89676]: 2025-12-11 09:15:41.963724345 +0000 UTC m=+0.832211446 container died a489b740379577465a327abbae4653c9ad418ef2f9d4f7bfc7f1075ce099eaa1 (image=quay.io/ceph/ceph:v19, name=goofy_robinson, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 11 09:15:41 compute-0 systemd[1]: var-lib-containers-storage-overlay-eb9411a3acdf5ff237462dbf739d1421ff24631ce39d5d12b4c05a8fd49c1e40-merged.mount: Deactivated successfully.
Dec 11 09:15:42 compute-0 podman[89676]: 2025-12-11 09:15:42.027393252 +0000 UTC m=+0.895880353 container remove a489b740379577465a327abbae4653c9ad418ef2f9d4f7bfc7f1075ce099eaa1 (image=quay.io/ceph/ceph:v19, name=goofy_robinson, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Dec 11 09:15:42 compute-0 sshd-session[88303]: Connection closed by 192.168.122.100 port 52114
Dec 11 09:15:42 compute-0 sshd-session[88300]: pam_unix(sshd:session): session closed for user ceph-admin
Dec 11 09:15:42 compute-0 systemd[1]: session-34.scope: Deactivated successfully.
Dec 11 09:15:42 compute-0 systemd[1]: session-34.scope: Consumed 5.196s CPU time.
Dec 11 09:15:42 compute-0 systemd[1]: libpod-conmon-a489b740379577465a327abbae4653c9ad418ef2f9d4f7bfc7f1075ce099eaa1.scope: Deactivated successfully.
Dec 11 09:15:42 compute-0 systemd-logind[792]: Session 34 logged out. Waiting for processes to exit.
Dec 11 09:15:42 compute-0 sudo[89582]: pam_unix(sudo:session): session closed for user root
Dec 11 09:15:42 compute-0 systemd-logind[792]: Removed session 34.
Dec 11 09:15:42 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-mgr-compute-0-wwpcae[74711]: ignoring --setuser ceph since I am not root
Dec 11 09:15:42 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-mgr-compute-0-wwpcae[74711]: ignoring --setgroup ceph since I am not root
Dec 11 09:15:42 compute-0 ceph-mgr[74715]: ceph version 19.2.3 (c92aebb279828e9c3c1f5d24613efca272649e62) squid (stable), process ceph-mgr, pid 2
Dec 11 09:15:42 compute-0 ceph-mgr[74715]: pidfile_write: ignore empty --pid-file
Dec 11 09:15:42 compute-0 ceph-mgr[74715]: mgr[py] Loading python module 'alerts'
Dec 11 09:15:42 compute-0 sudo[90049]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ouaapbkpvltxlknnmjftslrgjmtcqbar ; /usr/bin/python3'
Dec 11 09:15:42 compute-0 sudo[90049]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 11 09:15:42 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-mgr-compute-0-wwpcae[74711]: 2025-12-11T09:15:42.255+0000 7f2cac4dd140 -1 mgr[py] Module alerts has missing NOTIFY_TYPES member
Dec 11 09:15:42 compute-0 ceph-mgr[74715]: mgr[py] Module alerts has missing NOTIFY_TYPES member
Dec 11 09:15:42 compute-0 ceph-mgr[74715]: mgr[py] Loading python module 'balancer'
Dec 11 09:15:42 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-mgr-compute-0-wwpcae[74711]: 2025-12-11T09:15:42.358+0000 7f2cac4dd140 -1 mgr[py] Module balancer has missing NOTIFY_TYPES member
Dec 11 09:15:42 compute-0 ceph-mgr[74715]: mgr[py] Module balancer has missing NOTIFY_TYPES member
Dec 11 09:15:42 compute-0 ceph-mgr[74715]: mgr[py] Loading python module 'cephadm'
Dec 11 09:15:42 compute-0 python3[90051]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --interactive  --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   mgr module enable dashboard _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 11 09:15:42 compute-0 ceph-osd[82859]: log_channel(cluster) log [DBG] : 6.9 scrub starts
Dec 11 09:15:42 compute-0 ceph-osd[82859]: log_channel(cluster) log [DBG] : 6.9 scrub ok
Dec 11 09:15:42 compute-0 podman[90052]: 2025-12-11 09:15:42.434743635 +0000 UTC m=+0.049000749 container create 08fbcd5690b1e30baca9f506a67c88846ca8437c61c4b9220dde03efa909ebcc (image=quay.io/ceph/ceph:v19, name=upbeat_pike, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 11 09:15:42 compute-0 systemd[1]: Started libpod-conmon-08fbcd5690b1e30baca9f506a67c88846ca8437c61c4b9220dde03efa909ebcc.scope.
Dec 11 09:15:42 compute-0 systemd[1]: Started libcrun container.
Dec 11 09:15:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6aa2ea732af0802976a08b67e65452034d0a3f6c423ba4b6363f0de800dbfba3/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec 11 09:15:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6aa2ea732af0802976a08b67e65452034d0a3f6c423ba4b6363f0de800dbfba3/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 11 09:15:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6aa2ea732af0802976a08b67e65452034d0a3f6c423ba4b6363f0de800dbfba3/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Dec 11 09:15:42 compute-0 podman[90052]: 2025-12-11 09:15:42.416522242 +0000 UTC m=+0.030779376 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Dec 11 09:15:42 compute-0 podman[90052]: 2025-12-11 09:15:42.522845793 +0000 UTC m=+0.137102917 container init 08fbcd5690b1e30baca9f506a67c88846ca8437c61c4b9220dde03efa909ebcc (image=quay.io/ceph/ceph:v19, name=upbeat_pike, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid)
Dec 11 09:15:42 compute-0 podman[90052]: 2025-12-11 09:15:42.530420086 +0000 UTC m=+0.144677210 container start 08fbcd5690b1e30baca9f506a67c88846ca8437c61c4b9220dde03efa909ebcc (image=quay.io/ceph/ceph:v19, name=upbeat_pike, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid)
Dec 11 09:15:42 compute-0 podman[90052]: 2025-12-11 09:15:42.535683595 +0000 UTC m=+0.149940709 container attach 08fbcd5690b1e30baca9f506a67c88846ca8437c61c4b9220dde03efa909ebcc (image=quay.io/ceph/ceph:v19, name=upbeat_pike, ceph=True, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 11 09:15:42 compute-0 ceph-mon[74426]: 5.a scrub starts
Dec 11 09:15:42 compute-0 ceph-mon[74426]: 5.a scrub ok
Dec 11 09:15:42 compute-0 ceph-mon[74426]: 6.7 scrub starts
Dec 11 09:15:42 compute-0 ceph-mon[74426]: 6.7 scrub ok
Dec 11 09:15:42 compute-0 ceph-mon[74426]: from='client.? ' entity='client.admin' cmd='[{"prefix": "mgr module disable", "module": "dashboard"}]': finished
Dec 11 09:15:42 compute-0 ceph-mon[74426]: mgrmap e19: compute-0.wwpcae(active, since 7s), standbys: compute-2.uiimcn, compute-1.unesvp
Dec 11 09:15:42 compute-0 ceph-mon[74426]: from='mgr.14313 192.168.122.100:0/1097182915' entity='mgr.compute-0.wwpcae' 
Dec 11 09:15:42 compute-0 ceph-mon[74426]: 5.e scrub starts
Dec 11 09:15:42 compute-0 ceph-mon[74426]: 5.e scrub ok
Dec 11 09:15:42 compute-0 ceph-mon[74426]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr module enable", "module": "dashboard"} v 0)
Dec 11 09:15:42 compute-0 ceph-mon[74426]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2032173256' entity='client.admin' cmd=[{"prefix": "mgr module enable", "module": "dashboard"}]: dispatch
Dec 11 09:15:43 compute-0 ceph-mgr[74715]: mgr[py] Loading python module 'crash'
Dec 11 09:15:43 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-mgr-compute-0-wwpcae[74711]: 2025-12-11T09:15:43.348+0000 7f2cac4dd140 -1 mgr[py] Module crash has missing NOTIFY_TYPES member
Dec 11 09:15:43 compute-0 ceph-mgr[74715]: mgr[py] Module crash has missing NOTIFY_TYPES member
Dec 11 09:15:43 compute-0 ceph-mgr[74715]: mgr[py] Loading python module 'dashboard'
Dec 11 09:15:43 compute-0 ceph-osd[82859]: log_channel(cluster) log [DBG] : 6.b scrub starts
Dec 11 09:15:43 compute-0 ceph-osd[82859]: log_channel(cluster) log [DBG] : 6.b scrub ok
Dec 11 09:15:43 compute-0 ceph-mon[74426]: 6.9 scrub starts
Dec 11 09:15:43 compute-0 ceph-mon[74426]: 6.9 scrub ok
Dec 11 09:15:43 compute-0 ceph-mon[74426]: 4.5 scrub starts
Dec 11 09:15:43 compute-0 ceph-mon[74426]: 4.5 scrub ok
Dec 11 09:15:43 compute-0 ceph-mon[74426]: from='client.? 192.168.122.100:0/2032173256' entity='client.admin' cmd=[{"prefix": "mgr module enable", "module": "dashboard"}]: dispatch
Dec 11 09:15:43 compute-0 ceph-mon[74426]: 4.1 scrub starts
Dec 11 09:15:43 compute-0 ceph-mon[74426]: 4.1 scrub ok
Dec 11 09:15:43 compute-0 ceph-mon[74426]: 6.b scrub starts
Dec 11 09:15:43 compute-0 ceph-mon[74426]: 6.b scrub ok
Dec 11 09:15:43 compute-0 ceph-mon[74426]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2032173256' entity='client.admin' cmd='[{"prefix": "mgr module enable", "module": "dashboard"}]': finished
Dec 11 09:15:43 compute-0 ceph-mon[74426]: log_channel(cluster) log [DBG] : mgrmap e20: compute-0.wwpcae(active, since 9s), standbys: compute-2.uiimcn, compute-1.unesvp
Dec 11 09:15:43 compute-0 systemd[1]: libpod-08fbcd5690b1e30baca9f506a67c88846ca8437c61c4b9220dde03efa909ebcc.scope: Deactivated successfully.
Dec 11 09:15:43 compute-0 podman[90052]: 2025-12-11 09:15:43.975585783 +0000 UTC m=+1.589842897 container died 08fbcd5690b1e30baca9f506a67c88846ca8437c61c4b9220dde03efa909ebcc (image=quay.io/ceph/ceph:v19, name=upbeat_pike, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250325)
Dec 11 09:15:44 compute-0 systemd[1]: var-lib-containers-storage-overlay-6aa2ea732af0802976a08b67e65452034d0a3f6c423ba4b6363f0de800dbfba3-merged.mount: Deactivated successfully.
Dec 11 09:15:44 compute-0 ceph-mon[74426]: mon.compute-0@0(leader).osd e41 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 11 09:15:44 compute-0 podman[90052]: 2025-12-11 09:15:44.017263646 +0000 UTC m=+1.631520760 container remove 08fbcd5690b1e30baca9f506a67c88846ca8437c61c4b9220dde03efa909ebcc (image=quay.io/ceph/ceph:v19, name=upbeat_pike, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, ceph=True, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec 11 09:15:44 compute-0 systemd[1]: libpod-conmon-08fbcd5690b1e30baca9f506a67c88846ca8437c61c4b9220dde03efa909ebcc.scope: Deactivated successfully.
Dec 11 09:15:44 compute-0 sudo[90049]: pam_unix(sudo:session): session closed for user root
Dec 11 09:15:44 compute-0 ceph-mgr[74715]: mgr[py] Loading python module 'devicehealth'
Dec 11 09:15:44 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-mgr-compute-0-wwpcae[74711]: 2025-12-11T09:15:44.130+0000 7f2cac4dd140 -1 mgr[py] Module devicehealth has missing NOTIFY_TYPES member
Dec 11 09:15:44 compute-0 ceph-mgr[74715]: mgr[py] Module devicehealth has missing NOTIFY_TYPES member
Dec 11 09:15:44 compute-0 ceph-mgr[74715]: mgr[py] Loading python module 'diskprediction_local'
Dec 11 09:15:44 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-mgr-compute-0-wwpcae[74711]: /lib64/python3.9/site-packages/scipy/__init__.py:73: UserWarning: NumPy was imported from a Python sub-interpreter but NumPy does not properly support sub-interpreters. This will likely work for most users but might cause hard to track down issues or subtle bugs. A common user of the rare sub-interpreter feature is wsgi which also allows single-interpreter mode.
Dec 11 09:15:44 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-mgr-compute-0-wwpcae[74711]: Improvements in the case of bugs are welcome, but is not on the NumPy roadmap, and full support may require significant effort to achieve.
Dec 11 09:15:44 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-mgr-compute-0-wwpcae[74711]:   from numpy import show_config as show_numpy_config
Dec 11 09:15:44 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-mgr-compute-0-wwpcae[74711]: 2025-12-11T09:15:44.338+0000 7f2cac4dd140 -1 mgr[py] Module diskprediction_local has missing NOTIFY_TYPES member
Dec 11 09:15:44 compute-0 ceph-mgr[74715]: mgr[py] Module diskprediction_local has missing NOTIFY_TYPES member
Dec 11 09:15:44 compute-0 ceph-mgr[74715]: mgr[py] Loading python module 'influx'
Dec 11 09:15:44 compute-0 ceph-osd[82859]: log_channel(cluster) log [DBG] : 4.17 scrub starts
Dec 11 09:15:44 compute-0 ceph-osd[82859]: log_channel(cluster) log [DBG] : 4.17 scrub ok
Dec 11 09:15:44 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-mgr-compute-0-wwpcae[74711]: 2025-12-11T09:15:44.422+0000 7f2cac4dd140 -1 mgr[py] Module influx has missing NOTIFY_TYPES member
Dec 11 09:15:44 compute-0 ceph-mgr[74715]: mgr[py] Module influx has missing NOTIFY_TYPES member
Dec 11 09:15:44 compute-0 ceph-mgr[74715]: mgr[py] Loading python module 'insights'
Dec 11 09:15:44 compute-0 ceph-mgr[74715]: mgr[py] Loading python module 'iostat'
Dec 11 09:15:44 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-mgr-compute-0-wwpcae[74711]: 2025-12-11T09:15:44.581+0000 7f2cac4dd140 -1 mgr[py] Module iostat has missing NOTIFY_TYPES member
Dec 11 09:15:44 compute-0 ceph-mgr[74715]: mgr[py] Module iostat has missing NOTIFY_TYPES member
Dec 11 09:15:44 compute-0 ceph-mgr[74715]: mgr[py] Loading python module 'k8sevents'
Dec 11 09:15:44 compute-0 python3[90188]: ansible-ansible.legacy.stat Invoked with path=/tmp/ceph_mds.yml follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec 11 09:15:44 compute-0 ceph-mon[74426]: 5.7 scrub starts
Dec 11 09:15:44 compute-0 ceph-mon[74426]: 5.7 scrub ok
Dec 11 09:15:44 compute-0 ceph-mon[74426]: from='client.? 192.168.122.100:0/2032173256' entity='client.admin' cmd='[{"prefix": "mgr module enable", "module": "dashboard"}]': finished
Dec 11 09:15:44 compute-0 ceph-mon[74426]: mgrmap e20: compute-0.wwpcae(active, since 9s), standbys: compute-2.uiimcn, compute-1.unesvp
Dec 11 09:15:44 compute-0 ceph-mon[74426]: 5.1a scrub starts
Dec 11 09:15:44 compute-0 ceph-mon[74426]: 5.1a scrub ok
Dec 11 09:15:44 compute-0 ceph-mon[74426]: 4.17 scrub starts
Dec 11 09:15:44 compute-0 ceph-mon[74426]: 4.17 scrub ok
Dec 11 09:15:45 compute-0 ceph-mgr[74715]: mgr[py] Loading python module 'localpool'
Dec 11 09:15:45 compute-0 ceph-mgr[74715]: mgr[py] Loading python module 'mds_autoscaler'
Dec 11 09:15:45 compute-0 python3[90259]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1765444544.6156323-37269-251177819185393/source dest=/tmp/ceph_mds.yml mode=0644 force=True follow=False _original_basename=ceph_mds.yml.j2 checksum=b1f36629bdb347469f4890c95dfdef5abc68c3ae backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 11 09:15:45 compute-0 ceph-osd[82859]: log_channel(cluster) log [DBG] : 4.16 scrub starts
Dec 11 09:15:45 compute-0 ceph-osd[82859]: log_channel(cluster) log [DBG] : 4.16 scrub ok
Dec 11 09:15:45 compute-0 ceph-mgr[74715]: mgr[py] Loading python module 'mirroring'
Dec 11 09:15:45 compute-0 ceph-mgr[74715]: mgr[py] Loading python module 'nfs'
Dec 11 09:15:45 compute-0 sudo[90307]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-loqbpymjmvpstffedxkikqljvhjvysno ; /usr/bin/python3'
Dec 11 09:15:45 compute-0 sudo[90307]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 11 09:15:45 compute-0 python3[90309]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /tmp/ceph_mds.yml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   fs volume create cephfs '--placement=compute-0 compute-1 compute-2 ' _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 11 09:15:45 compute-0 podman[90310]: 2025-12-11 09:15:45.863982289 +0000 UTC m=+0.049832965 container create e475b1185f58da00d0f4dcf39e0c4aeefcd7966c371e69699683c0b40e6ea3b4 (image=quay.io/ceph/ceph:v19, name=jolly_hertz, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 11 09:15:45 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-mgr-compute-0-wwpcae[74711]: 2025-12-11T09:15:45.879+0000 7f2cac4dd140 -1 mgr[py] Module nfs has missing NOTIFY_TYPES member
Dec 11 09:15:45 compute-0 ceph-mgr[74715]: mgr[py] Module nfs has missing NOTIFY_TYPES member
Dec 11 09:15:45 compute-0 ceph-mgr[74715]: mgr[py] Loading python module 'orchestrator'
Dec 11 09:15:45 compute-0 systemd[1]: Started libpod-conmon-e475b1185f58da00d0f4dcf39e0c4aeefcd7966c371e69699683c0b40e6ea3b4.scope.
Dec 11 09:15:45 compute-0 systemd[1]: Started libcrun container.
Dec 11 09:15:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6f9d50d60188ab44b7ecbb8d71f4c0f78c3cd5b14dba9f790f694a20b756bb5f/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec 11 09:15:45 compute-0 podman[90310]: 2025-12-11 09:15:45.845174037 +0000 UTC m=+0.031024743 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Dec 11 09:15:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6f9d50d60188ab44b7ecbb8d71f4c0f78c3cd5b14dba9f790f694a20b756bb5f/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 11 09:15:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6f9d50d60188ab44b7ecbb8d71f4c0f78c3cd5b14dba9f790f694a20b756bb5f/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Dec 11 09:15:45 compute-0 podman[90310]: 2025-12-11 09:15:45.961427506 +0000 UTC m=+0.147278202 container init e475b1185f58da00d0f4dcf39e0c4aeefcd7966c371e69699683c0b40e6ea3b4 (image=quay.io/ceph/ceph:v19, name=jolly_hertz, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Dec 11 09:15:45 compute-0 ceph-mon[74426]: 5.2 scrub starts
Dec 11 09:15:45 compute-0 ceph-mon[74426]: 5.2 scrub ok
Dec 11 09:15:45 compute-0 ceph-mon[74426]: 7.5 scrub starts
Dec 11 09:15:45 compute-0 ceph-mon[74426]: 7.5 scrub ok
Dec 11 09:15:45 compute-0 ceph-mon[74426]: 4.16 scrub starts
Dec 11 09:15:45 compute-0 ceph-mon[74426]: 4.16 scrub ok
Dec 11 09:15:45 compute-0 podman[90310]: 2025-12-11 09:15:45.969401531 +0000 UTC m=+0.155252207 container start e475b1185f58da00d0f4dcf39e0c4aeefcd7966c371e69699683c0b40e6ea3b4 (image=quay.io/ceph/ceph:v19, name=jolly_hertz, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec 11 09:15:45 compute-0 podman[90310]: 2025-12-11 09:15:45.973780292 +0000 UTC m=+0.159630998 container attach e475b1185f58da00d0f4dcf39e0c4aeefcd7966c371e69699683c0b40e6ea3b4 (image=quay.io/ceph/ceph:v19, name=jolly_hertz, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 11 09:15:46 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-mgr-compute-0-wwpcae[74711]: 2025-12-11T09:15:46.140+0000 7f2cac4dd140 -1 mgr[py] Module orchestrator has missing NOTIFY_TYPES member
Dec 11 09:15:46 compute-0 ceph-mgr[74715]: mgr[py] Module orchestrator has missing NOTIFY_TYPES member
Dec 11 09:15:46 compute-0 ceph-mgr[74715]: mgr[py] Loading python module 'osd_perf_query'
Dec 11 09:15:46 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-mgr-compute-0-wwpcae[74711]: 2025-12-11T09:15:46.231+0000 7f2cac4dd140 -1 mgr[py] Module osd_perf_query has missing NOTIFY_TYPES member
Dec 11 09:15:46 compute-0 ceph-mgr[74715]: mgr[py] Module osd_perf_query has missing NOTIFY_TYPES member
Dec 11 09:15:46 compute-0 ceph-mgr[74715]: mgr[py] Loading python module 'osd_support'
Dec 11 09:15:46 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-mgr-compute-0-wwpcae[74711]: 2025-12-11T09:15:46.312+0000 7f2cac4dd140 -1 mgr[py] Module osd_support has missing NOTIFY_TYPES member
Dec 11 09:15:46 compute-0 ceph-mgr[74715]: mgr[py] Module osd_support has missing NOTIFY_TYPES member
Dec 11 09:15:46 compute-0 ceph-mgr[74715]: mgr[py] Loading python module 'pg_autoscaler'
Dec 11 09:15:46 compute-0 ceph-osd[82859]: log_channel(cluster) log [DBG] : 5.17 scrub starts
Dec 11 09:15:46 compute-0 ceph-osd[82859]: log_channel(cluster) log [DBG] : 5.17 scrub ok
Dec 11 09:15:46 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-mgr-compute-0-wwpcae[74711]: 2025-12-11T09:15:46.399+0000 7f2cac4dd140 -1 mgr[py] Module pg_autoscaler has missing NOTIFY_TYPES member
Dec 11 09:15:46 compute-0 ceph-mgr[74715]: mgr[py] Module pg_autoscaler has missing NOTIFY_TYPES member
Dec 11 09:15:46 compute-0 ceph-mgr[74715]: mgr[py] Loading python module 'progress'
Dec 11 09:15:46 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-mgr-compute-0-wwpcae[74711]: 2025-12-11T09:15:46.487+0000 7f2cac4dd140 -1 mgr[py] Module progress has missing NOTIFY_TYPES member
Dec 11 09:15:46 compute-0 ceph-mgr[74715]: mgr[py] Module progress has missing NOTIFY_TYPES member
Dec 11 09:15:46 compute-0 ceph-mgr[74715]: mgr[py] Loading python module 'prometheus'
Dec 11 09:15:46 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-mgr-compute-0-wwpcae[74711]: 2025-12-11T09:15:46.908+0000 7f2cac4dd140 -1 mgr[py] Module prometheus has missing NOTIFY_TYPES member
Dec 11 09:15:46 compute-0 ceph-mgr[74715]: mgr[py] Module prometheus has missing NOTIFY_TYPES member
Dec 11 09:15:46 compute-0 ceph-mgr[74715]: mgr[py] Loading python module 'rbd_support'
Dec 11 09:15:46 compute-0 ceph-mon[74426]: 6.5 scrub starts
Dec 11 09:15:46 compute-0 ceph-mon[74426]: 6.5 scrub ok
Dec 11 09:15:46 compute-0 ceph-mon[74426]: 3.1d deep-scrub starts
Dec 11 09:15:46 compute-0 ceph-mon[74426]: 3.1d deep-scrub ok
Dec 11 09:15:46 compute-0 ceph-mon[74426]: 5.17 scrub starts
Dec 11 09:15:46 compute-0 ceph-mon[74426]: 5.17 scrub ok
Dec 11 09:15:47 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-mgr-compute-0-wwpcae[74711]: 2025-12-11T09:15:47.036+0000 7f2cac4dd140 -1 mgr[py] Module rbd_support has missing NOTIFY_TYPES member
Dec 11 09:15:47 compute-0 ceph-mgr[74715]: mgr[py] Module rbd_support has missing NOTIFY_TYPES member
Dec 11 09:15:47 compute-0 ceph-mgr[74715]: mgr[py] Loading python module 'restful'
Dec 11 09:15:47 compute-0 ceph-mgr[74715]: mgr[py] Loading python module 'rgw'
Dec 11 09:15:47 compute-0 ceph-osd[82859]: log_channel(cluster) log [DBG] : 6.14 scrub starts
Dec 11 09:15:47 compute-0 ceph-osd[82859]: log_channel(cluster) log [DBG] : 6.14 scrub ok
Dec 11 09:15:47 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-mgr-compute-0-wwpcae[74711]: 2025-12-11T09:15:47.561+0000 7f2cac4dd140 -1 mgr[py] Module rgw has missing NOTIFY_TYPES member
Dec 11 09:15:47 compute-0 ceph-mgr[74715]: mgr[py] Module rgw has missing NOTIFY_TYPES member
Dec 11 09:15:47 compute-0 ceph-mgr[74715]: mgr[py] Loading python module 'rook'
Dec 11 09:15:47 compute-0 ceph-mon[74426]: 3.3 scrub starts
Dec 11 09:15:47 compute-0 ceph-mon[74426]: 3.3 scrub ok
Dec 11 09:15:47 compute-0 ceph-mon[74426]: 5.4 scrub starts
Dec 11 09:15:47 compute-0 ceph-mon[74426]: 5.4 scrub ok
Dec 11 09:15:47 compute-0 ceph-mon[74426]: 6.14 scrub starts
Dec 11 09:15:47 compute-0 ceph-mon[74426]: 6.14 scrub ok
Dec 11 09:15:48 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-mgr-compute-0-wwpcae[74711]: 2025-12-11T09:15:48.262+0000 7f2cac4dd140 -1 mgr[py] Module rook has missing NOTIFY_TYPES member
Dec 11 09:15:48 compute-0 ceph-mgr[74715]: mgr[py] Module rook has missing NOTIFY_TYPES member
Dec 11 09:15:48 compute-0 ceph-mgr[74715]: mgr[py] Loading python module 'selftest'
Dec 11 09:15:48 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-mgr-compute-0-wwpcae[74711]: 2025-12-11T09:15:48.339+0000 7f2cac4dd140 -1 mgr[py] Module selftest has missing NOTIFY_TYPES member
Dec 11 09:15:48 compute-0 ceph-mgr[74715]: mgr[py] Module selftest has missing NOTIFY_TYPES member
Dec 11 09:15:48 compute-0 ceph-mgr[74715]: mgr[py] Loading python module 'snap_schedule'
Dec 11 09:15:48 compute-0 ceph-osd[82859]: log_channel(cluster) log [DBG] : 3.12 scrub starts
Dec 11 09:15:48 compute-0 ceph-osd[82859]: log_channel(cluster) log [DBG] : 3.12 scrub ok
Dec 11 09:15:48 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-mgr-compute-0-wwpcae[74711]: 2025-12-11T09:15:48.437+0000 7f2cac4dd140 -1 mgr[py] Module snap_schedule has missing NOTIFY_TYPES member
Dec 11 09:15:48 compute-0 ceph-mgr[74715]: mgr[py] Module snap_schedule has missing NOTIFY_TYPES member
Dec 11 09:15:48 compute-0 ceph-mgr[74715]: mgr[py] Loading python module 'stats'
Dec 11 09:15:48 compute-0 ceph-mgr[74715]: mgr[py] Loading python module 'status'
Dec 11 09:15:48 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-mgr-compute-0-wwpcae[74711]: 2025-12-11T09:15:48.617+0000 7f2cac4dd140 -1 mgr[py] Module status has missing NOTIFY_TYPES member
Dec 11 09:15:48 compute-0 ceph-mgr[74715]: mgr[py] Module status has missing NOTIFY_TYPES member
Dec 11 09:15:48 compute-0 ceph-mgr[74715]: mgr[py] Loading python module 'telegraf'
Dec 11 09:15:48 compute-0 ceph-mon[74426]: log_channel(cluster) log [DBG] : Standby manager daemon compute-1.unesvp restarted
Dec 11 09:15:48 compute-0 ceph-mon[74426]: log_channel(cluster) log [DBG] : Standby manager daemon compute-1.unesvp started
Dec 11 09:15:48 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-mgr-compute-0-wwpcae[74711]: 2025-12-11T09:15:48.708+0000 7f2cac4dd140 -1 mgr[py] Module telegraf has missing NOTIFY_TYPES member
Dec 11 09:15:48 compute-0 ceph-mgr[74715]: mgr[py] Module telegraf has missing NOTIFY_TYPES member
Dec 11 09:15:48 compute-0 ceph-mgr[74715]: mgr[py] Loading python module 'telemetry'
Dec 11 09:15:48 compute-0 ceph-mon[74426]: log_channel(cluster) log [DBG] : Standby manager daemon compute-2.uiimcn restarted
Dec 11 09:15:48 compute-0 ceph-mon[74426]: log_channel(cluster) log [DBG] : Standby manager daemon compute-2.uiimcn started
Dec 11 09:15:48 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-mgr-compute-0-wwpcae[74711]: 2025-12-11T09:15:48.914+0000 7f2cac4dd140 -1 mgr[py] Module telemetry has missing NOTIFY_TYPES member
Dec 11 09:15:48 compute-0 ceph-mgr[74715]: mgr[py] Module telemetry has missing NOTIFY_TYPES member
Dec 11 09:15:48 compute-0 ceph-mgr[74715]: mgr[py] Loading python module 'test_orchestrator'
Dec 11 09:15:48 compute-0 ceph-mon[74426]: 6.2 scrub starts
Dec 11 09:15:48 compute-0 ceph-mon[74426]: 6.2 scrub ok
Dec 11 09:15:48 compute-0 ceph-mon[74426]: 3.1a deep-scrub starts
Dec 11 09:15:48 compute-0 ceph-mon[74426]: 3.1a deep-scrub ok
Dec 11 09:15:48 compute-0 ceph-mon[74426]: 3.12 scrub starts
Dec 11 09:15:48 compute-0 ceph-mon[74426]: 3.12 scrub ok
Dec 11 09:15:48 compute-0 ceph-mon[74426]: Standby manager daemon compute-1.unesvp restarted
Dec 11 09:15:48 compute-0 ceph-mon[74426]: Standby manager daemon compute-1.unesvp started
Dec 11 09:15:48 compute-0 ceph-mon[74426]: Standby manager daemon compute-2.uiimcn restarted
Dec 11 09:15:48 compute-0 ceph-mon[74426]: Standby manager daemon compute-2.uiimcn started
Dec 11 09:15:49 compute-0 ceph-mon[74426]: mon.compute-0@0(leader).osd e41 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 11 09:15:49 compute-0 ceph-mon[74426]: log_channel(cluster) log [DBG] : mgrmap e21: compute-0.wwpcae(active, since 14s), standbys: compute-1.unesvp, compute-2.uiimcn
Dec 11 09:15:49 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-mgr-compute-0-wwpcae[74711]: 2025-12-11T09:15:49.180+0000 7f2cac4dd140 -1 mgr[py] Module test_orchestrator has missing NOTIFY_TYPES member
Dec 11 09:15:49 compute-0 ceph-mgr[74715]: mgr[py] Module test_orchestrator has missing NOTIFY_TYPES member
Dec 11 09:15:49 compute-0 ceph-mgr[74715]: mgr[py] Loading python module 'volumes'
Dec 11 09:15:49 compute-0 ceph-osd[82859]: log_channel(cluster) log [DBG] : 5.14 scrub starts
Dec 11 09:15:49 compute-0 ceph-osd[82859]: log_channel(cluster) log [DBG] : 5.14 scrub ok
Dec 11 09:15:49 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-mgr-compute-0-wwpcae[74711]: 2025-12-11T09:15:49.497+0000 7f2cac4dd140 -1 mgr[py] Module volumes has missing NOTIFY_TYPES member
Dec 11 09:15:49 compute-0 ceph-mgr[74715]: mgr[py] Module volumes has missing NOTIFY_TYPES member
Dec 11 09:15:49 compute-0 ceph-mgr[74715]: mgr[py] Loading python module 'zabbix'
Dec 11 09:15:49 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-mgr-compute-0-wwpcae[74711]: 2025-12-11T09:15:49.586+0000 7f2cac4dd140 -1 mgr[py] Module zabbix has missing NOTIFY_TYPES member
Dec 11 09:15:49 compute-0 ceph-mgr[74715]: mgr[py] Module zabbix has missing NOTIFY_TYPES member
Dec 11 09:15:49 compute-0 ceph-mon[74426]: log_channel(cluster) log [INF] : Active manager daemon compute-0.wwpcae restarted
Dec 11 09:15:49 compute-0 ceph-mon[74426]: mon.compute-0@0(leader).osd e41 do_prune osdmap full prune enabled
Dec 11 09:15:49 compute-0 ceph-mon[74426]: log_channel(cluster) log [INF] : Activating manager daemon compute-0.wwpcae
Dec 11 09:15:49 compute-0 ceph-mgr[74715]: ms_deliver_dispatch: unhandled message 0x55941385f860 mon_map magic: 0 from mon.0 v2:192.168.122.100:3300/0
Dec 11 09:15:49 compute-0 ceph-mgr[74715]: mgr handle_mgr_map respawning because set of enabled modules changed!
Dec 11 09:15:49 compute-0 ceph-mgr[74715]: mgr respawn  e: '/usr/bin/ceph-mgr'
Dec 11 09:15:49 compute-0 ceph-mgr[74715]: mgr respawn  0: '/usr/bin/ceph-mgr'
Dec 11 09:15:49 compute-0 ceph-mgr[74715]: mgr respawn  1: '-n'
Dec 11 09:15:49 compute-0 ceph-mgr[74715]: mgr respawn  2: 'mgr.compute-0.wwpcae'
Dec 11 09:15:49 compute-0 ceph-mgr[74715]: mgr respawn  3: '-f'
Dec 11 09:15:49 compute-0 ceph-mgr[74715]: mgr respawn  4: '--setuser'
Dec 11 09:15:49 compute-0 ceph-mgr[74715]: mgr respawn  5: 'ceph'
Dec 11 09:15:49 compute-0 ceph-mgr[74715]: mgr respawn  6: '--setgroup'
Dec 11 09:15:49 compute-0 ceph-mgr[74715]: mgr respawn  7: 'ceph'
Dec 11 09:15:49 compute-0 ceph-mgr[74715]: mgr respawn  8: '--default-log-to-file=false'
Dec 11 09:15:49 compute-0 ceph-mgr[74715]: mgr respawn  9: '--default-log-to-journald=true'
Dec 11 09:15:49 compute-0 ceph-mgr[74715]: mgr respawn  10: '--default-log-to-stderr=false'
Dec 11 09:15:49 compute-0 ceph-mgr[74715]: mgr respawn respawning with exe /usr/bin/ceph-mgr
Dec 11 09:15:49 compute-0 ceph-mgr[74715]: mgr respawn  exe_path /proc/self/exe
Dec 11 09:15:49 compute-0 ceph-mon[74426]: mon.compute-0@0(leader).osd e42 e42: 3 total, 3 up, 3 in
Dec 11 09:15:49 compute-0 ceph-mon[74426]: log_channel(cluster) log [DBG] : osdmap e42: 3 total, 3 up, 3 in
Dec 11 09:15:49 compute-0 ceph-mon[74426]: log_channel(cluster) log [DBG] : mgrmap e22: compute-0.wwpcae(active, starting, since 0.0271702s), standbys: compute-1.unesvp, compute-2.uiimcn
Dec 11 09:15:49 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-mgr-compute-0-wwpcae[74711]: ignoring --setuser ceph since I am not root
Dec 11 09:15:49 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-mgr-compute-0-wwpcae[74711]: ignoring --setgroup ceph since I am not root
Dec 11 09:15:49 compute-0 ceph-mgr[74715]: ceph version 19.2.3 (c92aebb279828e9c3c1f5d24613efca272649e62) squid (stable), process ceph-mgr, pid 2
Dec 11 09:15:49 compute-0 ceph-mgr[74715]: pidfile_write: ignore empty --pid-file
Dec 11 09:15:49 compute-0 ceph-mgr[74715]: mgr[py] Loading python module 'alerts'
Dec 11 09:15:49 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-mgr-compute-0-wwpcae[74711]: 2025-12-11T09:15:49.838+0000 7f23477f9140 -1 mgr[py] Module alerts has missing NOTIFY_TYPES member
Dec 11 09:15:49 compute-0 ceph-mgr[74715]: mgr[py] Module alerts has missing NOTIFY_TYPES member
Dec 11 09:15:49 compute-0 ceph-mgr[74715]: mgr[py] Loading python module 'balancer'
Dec 11 09:15:49 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-mgr-compute-0-wwpcae[74711]: 2025-12-11T09:15:49.926+0000 7f23477f9140 -1 mgr[py] Module balancer has missing NOTIFY_TYPES member
Dec 11 09:15:49 compute-0 ceph-mgr[74715]: mgr[py] Module balancer has missing NOTIFY_TYPES member
Dec 11 09:15:49 compute-0 ceph-mgr[74715]: mgr[py] Loading python module 'cephadm'
Dec 11 09:15:50 compute-0 ceph-mon[74426]: 3.a scrub starts
Dec 11 09:15:50 compute-0 ceph-mon[74426]: 3.a scrub ok
Dec 11 09:15:50 compute-0 ceph-mon[74426]: mgrmap e21: compute-0.wwpcae(active, since 14s), standbys: compute-1.unesvp, compute-2.uiimcn
Dec 11 09:15:50 compute-0 ceph-mon[74426]: 4.8 deep-scrub starts
Dec 11 09:15:50 compute-0 ceph-mon[74426]: 4.8 deep-scrub ok
Dec 11 09:15:50 compute-0 ceph-mon[74426]: 5.14 scrub starts
Dec 11 09:15:50 compute-0 ceph-mon[74426]: 5.14 scrub ok
Dec 11 09:15:50 compute-0 ceph-mon[74426]: Active manager daemon compute-0.wwpcae restarted
Dec 11 09:15:50 compute-0 ceph-mon[74426]: Activating manager daemon compute-0.wwpcae
Dec 11 09:15:50 compute-0 ceph-mon[74426]: osdmap e42: 3 total, 3 up, 3 in
Dec 11 09:15:50 compute-0 ceph-mon[74426]: mgrmap e22: compute-0.wwpcae(active, starting, since 0.0271702s), standbys: compute-1.unesvp, compute-2.uiimcn
Dec 11 09:15:50 compute-0 ceph-osd[82859]: log_channel(cluster) log [DBG] : 6.16 scrub starts
Dec 11 09:15:50 compute-0 ceph-osd[82859]: log_channel(cluster) log [DBG] : 6.16 scrub ok
Dec 11 09:15:50 compute-0 ceph-mgr[74715]: mgr[py] Loading python module 'crash'
Dec 11 09:15:50 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-mgr-compute-0-wwpcae[74711]: 2025-12-11T09:15:50.892+0000 7f23477f9140 -1 mgr[py] Module crash has missing NOTIFY_TYPES member
Dec 11 09:15:50 compute-0 ceph-mgr[74715]: mgr[py] Module crash has missing NOTIFY_TYPES member
Dec 11 09:15:50 compute-0 ceph-mgr[74715]: mgr[py] Loading python module 'dashboard'
Dec 11 09:15:51 compute-0 ceph-mon[74426]: 5.1 scrub starts
Dec 11 09:15:51 compute-0 ceph-mon[74426]: 5.1 scrub ok
Dec 11 09:15:51 compute-0 ceph-mon[74426]: 7.1f scrub starts
Dec 11 09:15:51 compute-0 ceph-mon[74426]: 7.1f scrub ok
Dec 11 09:15:51 compute-0 ceph-mon[74426]: 6.16 scrub starts
Dec 11 09:15:51 compute-0 ceph-mon[74426]: 6.16 scrub ok
Dec 11 09:15:51 compute-0 ceph-osd[82859]: log_channel(cluster) log [DBG] : 6.11 scrub starts
Dec 11 09:15:51 compute-0 ceph-osd[82859]: log_channel(cluster) log [DBG] : 6.11 scrub ok
Dec 11 09:15:51 compute-0 ceph-mgr[74715]: mgr[py] Loading python module 'devicehealth'
Dec 11 09:15:51 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-mgr-compute-0-wwpcae[74711]: 2025-12-11T09:15:51.609+0000 7f23477f9140 -1 mgr[py] Module devicehealth has missing NOTIFY_TYPES member
Dec 11 09:15:51 compute-0 ceph-mgr[74715]: mgr[py] Module devicehealth has missing NOTIFY_TYPES member
Dec 11 09:15:51 compute-0 ceph-mgr[74715]: mgr[py] Loading python module 'diskprediction_local'
Dec 11 09:15:51 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-mgr-compute-0-wwpcae[74711]: /lib64/python3.9/site-packages/scipy/__init__.py:73: UserWarning: NumPy was imported from a Python sub-interpreter but NumPy does not properly support sub-interpreters. This will likely work for most users but might cause hard to track down issues or subtle bugs. A common user of the rare sub-interpreter feature is wsgi which also allows single-interpreter mode.
Dec 11 09:15:51 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-mgr-compute-0-wwpcae[74711]: Improvements in the case of bugs are welcome, but is not on the NumPy roadmap, and full support may require significant effort to achieve.
Dec 11 09:15:51 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-mgr-compute-0-wwpcae[74711]:   from numpy import show_config as show_numpy_config
Dec 11 09:15:51 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-mgr-compute-0-wwpcae[74711]: 2025-12-11T09:15:51.798+0000 7f23477f9140 -1 mgr[py] Module diskprediction_local has missing NOTIFY_TYPES member
Dec 11 09:15:51 compute-0 ceph-mgr[74715]: mgr[py] Module diskprediction_local has missing NOTIFY_TYPES member
Dec 11 09:15:51 compute-0 ceph-mgr[74715]: mgr[py] Loading python module 'influx'
Dec 11 09:15:51 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-mgr-compute-0-wwpcae[74711]: 2025-12-11T09:15:51.894+0000 7f23477f9140 -1 mgr[py] Module influx has missing NOTIFY_TYPES member
Dec 11 09:15:51 compute-0 ceph-mgr[74715]: mgr[py] Module influx has missing NOTIFY_TYPES member
Dec 11 09:15:51 compute-0 ceph-mgr[74715]: mgr[py] Loading python module 'insights'
Dec 11 09:15:51 compute-0 ceph-mgr[74715]: mgr[py] Loading python module 'iostat'
Dec 11 09:15:52 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-mgr-compute-0-wwpcae[74711]: 2025-12-11T09:15:52.044+0000 7f23477f9140 -1 mgr[py] Module iostat has missing NOTIFY_TYPES member
Dec 11 09:15:52 compute-0 ceph-mgr[74715]: mgr[py] Module iostat has missing NOTIFY_TYPES member
Dec 11 09:15:52 compute-0 ceph-mgr[74715]: mgr[py] Loading python module 'k8sevents'
Dec 11 09:15:52 compute-0 ceph-mon[74426]: 3.5 scrub starts
Dec 11 09:15:52 compute-0 ceph-mon[74426]: 3.5 scrub ok
Dec 11 09:15:52 compute-0 ceph-mon[74426]: 4.9 scrub starts
Dec 11 09:15:52 compute-0 ceph-mon[74426]: 4.9 scrub ok
Dec 11 09:15:52 compute-0 ceph-mon[74426]: 6.11 scrub starts
Dec 11 09:15:52 compute-0 ceph-mon[74426]: 6.11 scrub ok
Dec 11 09:15:52 compute-0 systemd[1]: Stopping User Manager for UID 42477...
Dec 11 09:15:52 compute-0 systemd[75800]: Activating special unit Exit the Session...
Dec 11 09:15:52 compute-0 systemd[75800]: Stopped target Main User Target.
Dec 11 09:15:52 compute-0 systemd[75800]: Stopped target Basic System.
Dec 11 09:15:52 compute-0 systemd[75800]: Stopped target Paths.
Dec 11 09:15:52 compute-0 systemd[75800]: Stopped target Sockets.
Dec 11 09:15:52 compute-0 systemd[75800]: Stopped target Timers.
Dec 11 09:15:52 compute-0 systemd[75800]: Stopped Mark boot as successful after the user session has run 2 minutes.
Dec 11 09:15:52 compute-0 systemd[75800]: Stopped Daily Cleanup of User's Temporary Directories.
Dec 11 09:15:52 compute-0 systemd[75800]: Closed D-Bus User Message Bus Socket.
Dec 11 09:15:52 compute-0 systemd[75800]: Stopped Create User's Volatile Files and Directories.
Dec 11 09:15:52 compute-0 systemd[75800]: Removed slice User Application Slice.
Dec 11 09:15:52 compute-0 systemd[75800]: Reached target Shutdown.
Dec 11 09:15:52 compute-0 systemd[75800]: Finished Exit the Session.
Dec 11 09:15:52 compute-0 systemd[75800]: Reached target Exit the Session.
Dec 11 09:15:52 compute-0 systemd[1]: user@42477.service: Deactivated successfully.
Dec 11 09:15:52 compute-0 systemd[1]: Stopped User Manager for UID 42477.
Dec 11 09:15:52 compute-0 systemd[1]: Stopping User Runtime Directory /run/user/42477...
Dec 11 09:15:52 compute-0 systemd[1]: run-user-42477.mount: Deactivated successfully.
Dec 11 09:15:52 compute-0 systemd[1]: user-runtime-dir@42477.service: Deactivated successfully.
Dec 11 09:15:52 compute-0 systemd[1]: Stopped User Runtime Directory /run/user/42477.
Dec 11 09:15:52 compute-0 systemd[1]: Removed slice User Slice of UID 42477.
Dec 11 09:15:52 compute-0 systemd[1]: user-42477.slice: Consumed 29.749s CPU time.
Dec 11 09:15:52 compute-0 ceph-osd[82859]: log_channel(cluster) log [DBG] : 4.12 scrub starts
Dec 11 09:15:52 compute-0 ceph-osd[82859]: log_channel(cluster) log [DBG] : 4.12 scrub ok
Dec 11 09:15:52 compute-0 ceph-mgr[74715]: mgr[py] Loading python module 'localpool'
Dec 11 09:15:52 compute-0 ceph-mgr[74715]: mgr[py] Loading python module 'mds_autoscaler'
Dec 11 09:15:52 compute-0 ceph-mgr[74715]: mgr[py] Loading python module 'mirroring'
Dec 11 09:15:52 compute-0 ceph-mgr[74715]: mgr[py] Loading python module 'nfs'
Dec 11 09:15:53 compute-0 ceph-mon[74426]: 5.f scrub starts
Dec 11 09:15:53 compute-0 ceph-mon[74426]: 5.f scrub ok
Dec 11 09:15:53 compute-0 ceph-mon[74426]: 7.11 scrub starts
Dec 11 09:15:53 compute-0 ceph-mon[74426]: 7.11 scrub ok
Dec 11 09:15:53 compute-0 ceph-mon[74426]: 4.12 scrub starts
Dec 11 09:15:53 compute-0 ceph-mon[74426]: 4.12 scrub ok
Dec 11 09:15:53 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-mgr-compute-0-wwpcae[74711]: 2025-12-11T09:15:53.186+0000 7f23477f9140 -1 mgr[py] Module nfs has missing NOTIFY_TYPES member
Dec 11 09:15:53 compute-0 ceph-mgr[74715]: mgr[py] Module nfs has missing NOTIFY_TYPES member
Dec 11 09:15:53 compute-0 ceph-mgr[74715]: mgr[py] Loading python module 'orchestrator'
Dec 11 09:15:53 compute-0 ceph-osd[82859]: log_channel(cluster) log [DBG] : 6.10 scrub starts
Dec 11 09:15:53 compute-0 ceph-osd[82859]: log_channel(cluster) log [DBG] : 6.10 scrub ok
Dec 11 09:15:53 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-mgr-compute-0-wwpcae[74711]: 2025-12-11T09:15:53.433+0000 7f23477f9140 -1 mgr[py] Module orchestrator has missing NOTIFY_TYPES member
Dec 11 09:15:53 compute-0 ceph-mgr[74715]: mgr[py] Module orchestrator has missing NOTIFY_TYPES member
Dec 11 09:15:53 compute-0 ceph-mgr[74715]: mgr[py] Loading python module 'osd_perf_query'
Dec 11 09:15:53 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-mgr-compute-0-wwpcae[74711]: 2025-12-11T09:15:53.517+0000 7f23477f9140 -1 mgr[py] Module osd_perf_query has missing NOTIFY_TYPES member
Dec 11 09:15:53 compute-0 ceph-mgr[74715]: mgr[py] Module osd_perf_query has missing NOTIFY_TYPES member
Dec 11 09:15:53 compute-0 ceph-mgr[74715]: mgr[py] Loading python module 'osd_support'
Dec 11 09:15:53 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-mgr-compute-0-wwpcae[74711]: 2025-12-11T09:15:53.594+0000 7f23477f9140 -1 mgr[py] Module osd_support has missing NOTIFY_TYPES member
Dec 11 09:15:53 compute-0 ceph-mgr[74715]: mgr[py] Module osd_support has missing NOTIFY_TYPES member
Dec 11 09:15:53 compute-0 ceph-mgr[74715]: mgr[py] Loading python module 'pg_autoscaler'
Dec 11 09:15:53 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-mgr-compute-0-wwpcae[74711]: 2025-12-11T09:15:53.688+0000 7f23477f9140 -1 mgr[py] Module pg_autoscaler has missing NOTIFY_TYPES member
Dec 11 09:15:53 compute-0 ceph-mgr[74715]: mgr[py] Module pg_autoscaler has missing NOTIFY_TYPES member
Dec 11 09:15:53 compute-0 ceph-mgr[74715]: mgr[py] Loading python module 'progress'
Dec 11 09:15:53 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-mgr-compute-0-wwpcae[74711]: 2025-12-11T09:15:53.776+0000 7f23477f9140 -1 mgr[py] Module progress has missing NOTIFY_TYPES member
Dec 11 09:15:53 compute-0 ceph-mgr[74715]: mgr[py] Module progress has missing NOTIFY_TYPES member
Dec 11 09:15:53 compute-0 ceph-mgr[74715]: mgr[py] Loading python module 'prometheus'
Dec 11 09:15:54 compute-0 ceph-mon[74426]: mon.compute-0@0(leader).osd e42 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 11 09:15:54 compute-0 ceph-mon[74426]: 4.e scrub starts
Dec 11 09:15:54 compute-0 ceph-mon[74426]: 4.e scrub ok
Dec 11 09:15:54 compute-0 ceph-mon[74426]: 3.15 scrub starts
Dec 11 09:15:54 compute-0 ceph-mon[74426]: 3.15 scrub ok
Dec 11 09:15:54 compute-0 ceph-mon[74426]: 6.10 scrub starts
Dec 11 09:15:54 compute-0 ceph-mon[74426]: 6.10 scrub ok
Dec 11 09:15:54 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-mgr-compute-0-wwpcae[74711]: 2025-12-11T09:15:54.179+0000 7f23477f9140 -1 mgr[py] Module prometheus has missing NOTIFY_TYPES member
Dec 11 09:15:54 compute-0 ceph-mgr[74715]: mgr[py] Module prometheus has missing NOTIFY_TYPES member
Dec 11 09:15:54 compute-0 ceph-mgr[74715]: mgr[py] Loading python module 'rbd_support'
Dec 11 09:15:54 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-mgr-compute-0-wwpcae[74711]: 2025-12-11T09:15:54.291+0000 7f23477f9140 -1 mgr[py] Module rbd_support has missing NOTIFY_TYPES member
Dec 11 09:15:54 compute-0 ceph-mgr[74715]: mgr[py] Module rbd_support has missing NOTIFY_TYPES member
Dec 11 09:15:54 compute-0 ceph-mgr[74715]: mgr[py] Loading python module 'restful'
Dec 11 09:15:54 compute-0 ceph-osd[82859]: log_channel(cluster) log [DBG] : 6.13 deep-scrub starts
Dec 11 09:15:54 compute-0 ceph-osd[82859]: log_channel(cluster) log [DBG] : 6.13 deep-scrub ok
Dec 11 09:15:54 compute-0 ceph-mgr[74715]: mgr[py] Loading python module 'rgw'
Dec 11 09:15:54 compute-0 ceph-mon[74426]: log_channel(cluster) log [DBG] : Standby manager daemon compute-1.unesvp restarted
Dec 11 09:15:54 compute-0 ceph-mon[74426]: log_channel(cluster) log [DBG] : Standby manager daemon compute-1.unesvp started
Dec 11 09:15:54 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-mgr-compute-0-wwpcae[74711]: 2025-12-11T09:15:54.813+0000 7f23477f9140 -1 mgr[py] Module rgw has missing NOTIFY_TYPES member
Dec 11 09:15:54 compute-0 ceph-mgr[74715]: mgr[py] Module rgw has missing NOTIFY_TYPES member
Dec 11 09:15:54 compute-0 ceph-mgr[74715]: mgr[py] Loading python module 'rook'
Dec 11 09:15:55 compute-0 ceph-mon[74426]: log_channel(cluster) log [DBG] : mgrmap e23: compute-0.wwpcae(active, starting, since 5s), standbys: compute-2.uiimcn, compute-1.unesvp
Dec 11 09:15:55 compute-0 ceph-mon[74426]: 6.e scrub starts
Dec 11 09:15:55 compute-0 ceph-mon[74426]: 6.e scrub ok
Dec 11 09:15:55 compute-0 ceph-mon[74426]: 7.16 scrub starts
Dec 11 09:15:55 compute-0 ceph-mon[74426]: 7.16 scrub ok
Dec 11 09:15:55 compute-0 ceph-mon[74426]: 6.13 deep-scrub starts
Dec 11 09:15:55 compute-0 ceph-mon[74426]: 6.13 deep-scrub ok
Dec 11 09:15:55 compute-0 ceph-mon[74426]: Standby manager daemon compute-1.unesvp restarted
Dec 11 09:15:55 compute-0 ceph-mon[74426]: Standby manager daemon compute-1.unesvp started
Dec 11 09:15:55 compute-0 ceph-osd[82859]: log_channel(cluster) log [DBG] : 5.1e scrub starts
Dec 11 09:15:55 compute-0 ceph-osd[82859]: log_channel(cluster) log [DBG] : 5.1e scrub ok
Dec 11 09:15:55 compute-0 ceph-mon[74426]: log_channel(cluster) log [DBG] : Standby manager daemon compute-2.uiimcn restarted
Dec 11 09:15:55 compute-0 ceph-mon[74426]: log_channel(cluster) log [DBG] : Standby manager daemon compute-2.uiimcn started
Dec 11 09:15:55 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-mgr-compute-0-wwpcae[74711]: 2025-12-11T09:15:55.524+0000 7f23477f9140 -1 mgr[py] Module rook has missing NOTIFY_TYPES member
Dec 11 09:15:55 compute-0 ceph-mgr[74715]: mgr[py] Module rook has missing NOTIFY_TYPES member
Dec 11 09:15:55 compute-0 ceph-mgr[74715]: mgr[py] Loading python module 'selftest'
Dec 11 09:15:55 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-mgr-compute-0-wwpcae[74711]: 2025-12-11T09:15:55.609+0000 7f23477f9140 -1 mgr[py] Module selftest has missing NOTIFY_TYPES member
Dec 11 09:15:55 compute-0 ceph-mgr[74715]: mgr[py] Module selftest has missing NOTIFY_TYPES member
Dec 11 09:15:55 compute-0 ceph-mgr[74715]: mgr[py] Loading python module 'snap_schedule'
Dec 11 09:15:55 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-mgr-compute-0-wwpcae[74711]: 2025-12-11T09:15:55.709+0000 7f23477f9140 -1 mgr[py] Module snap_schedule has missing NOTIFY_TYPES member
Dec 11 09:15:55 compute-0 ceph-mgr[74715]: mgr[py] Module snap_schedule has missing NOTIFY_TYPES member
Dec 11 09:15:55 compute-0 ceph-mgr[74715]: mgr[py] Loading python module 'stats'
Dec 11 09:15:55 compute-0 ceph-mgr[74715]: mgr[py] Loading python module 'status'
Dec 11 09:15:55 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-mgr-compute-0-wwpcae[74711]: 2025-12-11T09:15:55.893+0000 7f23477f9140 -1 mgr[py] Module status has missing NOTIFY_TYPES member
Dec 11 09:15:55 compute-0 ceph-mgr[74715]: mgr[py] Module status has missing NOTIFY_TYPES member
Dec 11 09:15:55 compute-0 ceph-mgr[74715]: mgr[py] Loading python module 'telegraf'
Dec 11 09:15:55 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-mgr-compute-0-wwpcae[74711]: 2025-12-11T09:15:55.975+0000 7f23477f9140 -1 mgr[py] Module telegraf has missing NOTIFY_TYPES member
Dec 11 09:15:55 compute-0 ceph-mgr[74715]: mgr[py] Module telegraf has missing NOTIFY_TYPES member
Dec 11 09:15:55 compute-0 ceph-mgr[74715]: mgr[py] Loading python module 'telemetry'
Dec 11 09:15:56 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-mgr-compute-0-wwpcae[74711]: 2025-12-11T09:15:56.145+0000 7f23477f9140 -1 mgr[py] Module telemetry has missing NOTIFY_TYPES member
Dec 11 09:15:56 compute-0 ceph-mgr[74715]: mgr[py] Module telemetry has missing NOTIFY_TYPES member
Dec 11 09:15:56 compute-0 ceph-mgr[74715]: mgr[py] Loading python module 'test_orchestrator'
Dec 11 09:15:56 compute-0 ceph-mon[74426]: 4.c deep-scrub starts
Dec 11 09:15:56 compute-0 ceph-mon[74426]: 4.c deep-scrub ok
Dec 11 09:15:56 compute-0 ceph-mon[74426]: mgrmap e23: compute-0.wwpcae(active, starting, since 5s), standbys: compute-2.uiimcn, compute-1.unesvp
Dec 11 09:15:56 compute-0 ceph-mon[74426]: 7.14 scrub starts
Dec 11 09:15:56 compute-0 ceph-mon[74426]: 7.14 scrub ok
Dec 11 09:15:56 compute-0 ceph-mon[74426]: 5.1e scrub starts
Dec 11 09:15:56 compute-0 ceph-mon[74426]: 5.1e scrub ok
Dec 11 09:15:56 compute-0 ceph-mon[74426]: Standby manager daemon compute-2.uiimcn restarted
Dec 11 09:15:56 compute-0 ceph-mon[74426]: Standby manager daemon compute-2.uiimcn started
Dec 11 09:15:56 compute-0 ceph-mon[74426]: log_channel(cluster) log [DBG] : mgrmap e24: compute-0.wwpcae(active, starting, since 6s), standbys: compute-1.unesvp, compute-2.uiimcn
Dec 11 09:15:56 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-mgr-compute-0-wwpcae[74711]: 2025-12-11T09:15:56.416+0000 7f23477f9140 -1 mgr[py] Module test_orchestrator has missing NOTIFY_TYPES member
Dec 11 09:15:56 compute-0 ceph-mgr[74715]: mgr[py] Module test_orchestrator has missing NOTIFY_TYPES member
Dec 11 09:15:56 compute-0 ceph-mgr[74715]: mgr[py] Loading python module 'volumes'
Dec 11 09:15:56 compute-0 ceph-osd[82859]: log_channel(cluster) log [DBG] : 6.1d scrub starts
Dec 11 09:15:56 compute-0 ceph-osd[82859]: log_channel(cluster) log [DBG] : 6.1d scrub ok
Dec 11 09:15:56 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-mgr-compute-0-wwpcae[74711]: 2025-12-11T09:15:56.734+0000 7f23477f9140 -1 mgr[py] Module volumes has missing NOTIFY_TYPES member
Dec 11 09:15:56 compute-0 ceph-mgr[74715]: mgr[py] Module volumes has missing NOTIFY_TYPES member
Dec 11 09:15:56 compute-0 ceph-mgr[74715]: mgr[py] Loading python module 'zabbix'
Dec 11 09:15:56 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-mgr-compute-0-wwpcae[74711]: 2025-12-11T09:15:56.823+0000 7f23477f9140 -1 mgr[py] Module zabbix has missing NOTIFY_TYPES member
Dec 11 09:15:56 compute-0 ceph-mgr[74715]: mgr[py] Module zabbix has missing NOTIFY_TYPES member
Dec 11 09:15:56 compute-0 ceph-mon[74426]: log_channel(cluster) log [INF] : Active manager daemon compute-0.wwpcae restarted
Dec 11 09:15:56 compute-0 ceph-mon[74426]: mon.compute-0@0(leader).osd e42 do_prune osdmap full prune enabled
Dec 11 09:15:56 compute-0 ceph-mon[74426]: log_channel(cluster) log [INF] : Activating manager daemon compute-0.wwpcae
Dec 11 09:15:56 compute-0 ceph-mgr[74715]: ms_deliver_dispatch: unhandled message 0x557138463860 mon_map magic: 0 from mon.0 v2:192.168.122.100:3300/0
Dec 11 09:15:56 compute-0 ceph-mon[74426]: mon.compute-0@0(leader).osd e43 e43: 3 total, 3 up, 3 in
Dec 11 09:15:56 compute-0 ceph-mon[74426]: log_channel(cluster) log [DBG] : osdmap e43: 3 total, 3 up, 3 in
Dec 11 09:15:56 compute-0 ceph-mon[74426]: log_channel(cluster) log [DBG] : mgrmap e25: compute-0.wwpcae(active, starting, since 0.0371803s), standbys: compute-1.unesvp, compute-2.uiimcn
Dec 11 09:15:56 compute-0 ceph-mgr[74715]: mgr handle_mgr_map Activating!
Dec 11 09:15:56 compute-0 ceph-mgr[74715]: mgr handle_mgr_map I am now activating
Dec 11 09:15:56 compute-0 ceph-mon[74426]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon metadata", "id": "compute-0"} v 0)
Dec 11 09:15:56 compute-0 ceph-mon[74426]: log_channel(audit) log [DBG] : from='mgr.14400 192.168.122.100:0/1063598679' entity='mgr.compute-0.wwpcae' cmd=[{"prefix": "mon metadata", "id": "compute-0"}]: dispatch
Dec 11 09:15:56 compute-0 ceph-mon[74426]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon metadata", "id": "compute-1"} v 0)
Dec 11 09:15:56 compute-0 ceph-mon[74426]: log_channel(audit) log [DBG] : from='mgr.14400 192.168.122.100:0/1063598679' entity='mgr.compute-0.wwpcae' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Dec 11 09:15:56 compute-0 ceph-mon[74426]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon metadata", "id": "compute-2"} v 0)
Dec 11 09:15:56 compute-0 ceph-mon[74426]: log_channel(audit) log [DBG] : from='mgr.14400 192.168.122.100:0/1063598679' entity='mgr.compute-0.wwpcae' cmd=[{"prefix": "mon metadata", "id": "compute-2"}]: dispatch
Dec 11 09:15:56 compute-0 ceph-mon[74426]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr metadata", "who": "compute-0.wwpcae", "id": "compute-0.wwpcae"} v 0)
Dec 11 09:15:56 compute-0 ceph-mon[74426]: log_channel(audit) log [DBG] : from='mgr.14400 192.168.122.100:0/1063598679' entity='mgr.compute-0.wwpcae' cmd=[{"prefix": "mgr metadata", "who": "compute-0.wwpcae", "id": "compute-0.wwpcae"}]: dispatch
Dec 11 09:15:56 compute-0 ceph-mon[74426]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr metadata", "who": "compute-1.unesvp", "id": "compute-1.unesvp"} v 0)
Dec 11 09:15:56 compute-0 ceph-mon[74426]: log_channel(audit) log [DBG] : from='mgr.14400 192.168.122.100:0/1063598679' entity='mgr.compute-0.wwpcae' cmd=[{"prefix": "mgr metadata", "who": "compute-1.unesvp", "id": "compute-1.unesvp"}]: dispatch
Dec 11 09:15:56 compute-0 ceph-mon[74426]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr metadata", "who": "compute-2.uiimcn", "id": "compute-2.uiimcn"} v 0)
Dec 11 09:15:56 compute-0 ceph-mon[74426]: log_channel(audit) log [DBG] : from='mgr.14400 192.168.122.100:0/1063598679' entity='mgr.compute-0.wwpcae' cmd=[{"prefix": "mgr metadata", "who": "compute-2.uiimcn", "id": "compute-2.uiimcn"}]: dispatch
Dec 11 09:15:56 compute-0 ceph-mon[74426]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0)
Dec 11 09:15:56 compute-0 ceph-mon[74426]: log_channel(audit) log [DBG] : from='mgr.14400 192.168.122.100:0/1063598679' entity='mgr.compute-0.wwpcae' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Dec 11 09:15:56 compute-0 ceph-mon[74426]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0)
Dec 11 09:15:56 compute-0 ceph-mon[74426]: log_channel(audit) log [DBG] : from='mgr.14400 192.168.122.100:0/1063598679' entity='mgr.compute-0.wwpcae' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Dec 11 09:15:56 compute-0 ceph-mon[74426]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Dec 11 09:15:56 compute-0 ceph-mon[74426]: log_channel(audit) log [DBG] : from='mgr.14400 192.168.122.100:0/1063598679' entity='mgr.compute-0.wwpcae' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Dec 11 09:15:56 compute-0 ceph-mon[74426]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mds metadata"} v 0)
Dec 11 09:15:56 compute-0 ceph-mon[74426]: log_channel(audit) log [DBG] : from='mgr.14400 192.168.122.100:0/1063598679' entity='mgr.compute-0.wwpcae' cmd=[{"prefix": "mds metadata"}]: dispatch
Dec 11 09:15:56 compute-0 ceph-mon[74426]: mon.compute-0@0(leader).mds e1 all = 1
Dec 11 09:15:56 compute-0 ceph-mon[74426]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata"} v 0)
Dec 11 09:15:56 compute-0 ceph-mon[74426]: log_channel(audit) log [DBG] : from='mgr.14400 192.168.122.100:0/1063598679' entity='mgr.compute-0.wwpcae' cmd=[{"prefix": "osd metadata"}]: dispatch
Dec 11 09:15:56 compute-0 ceph-mon[74426]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon metadata"} v 0)
Dec 11 09:15:56 compute-0 ceph-mon[74426]: log_channel(audit) log [DBG] : from='mgr.14400 192.168.122.100:0/1063598679' entity='mgr.compute-0.wwpcae' cmd=[{"prefix": "mon metadata"}]: dispatch
Dec 11 09:15:56 compute-0 ceph-mgr[74715]: [balancer DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec 11 09:15:56 compute-0 ceph-mgr[74715]: mgr load Constructed class from module: balancer
Dec 11 09:15:56 compute-0 ceph-mgr[74715]: [cephadm DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec 11 09:15:56 compute-0 ceph-mon[74426]: log_channel(cluster) log [INF] : Manager daemon compute-0.wwpcae is now available
Dec 11 09:15:56 compute-0 ceph-mgr[74715]: [balancer INFO root] Starting
Dec 11 09:15:56 compute-0 ceph-mgr[74715]: [balancer INFO root] Optimize plan auto_2025-12-11_09:15:56
Dec 11 09:15:56 compute-0 ceph-mgr[74715]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec 11 09:15:56 compute-0 ceph-mgr[74715]: [balancer INFO root] Some PGs (1.000000) are unknown; try again later
Dec 11 09:15:56 compute-0 ceph-mgr[74715]: mgr load Constructed class from module: cephadm
Dec 11 09:15:56 compute-0 ceph-mgr[74715]: [crash DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec 11 09:15:56 compute-0 ceph-mgr[74715]: mgr load Constructed class from module: crash
Dec 11 09:15:56 compute-0 ceph-mgr[74715]: [dashboard DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec 11 09:15:56 compute-0 ceph-mgr[74715]: mgr load Constructed class from module: dashboard
Dec 11 09:15:56 compute-0 ceph-mgr[74715]: [dashboard INFO access_control] Loading user roles DB version=2
Dec 11 09:15:56 compute-0 ceph-mgr[74715]: [dashboard INFO sso] Loading SSO DB version=1
Dec 11 09:15:56 compute-0 ceph-mgr[74715]: [dashboard INFO root] server: ssl=no host=192.168.122.100 port=8443
Dec 11 09:15:56 compute-0 ceph-mgr[74715]: [dashboard INFO root] Configured CherryPy, starting engine...
Dec 11 09:15:56 compute-0 ceph-mgr[74715]: [devicehealth DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec 11 09:15:56 compute-0 ceph-mgr[74715]: mgr load Constructed class from module: devicehealth
Dec 11 09:15:56 compute-0 ceph-mgr[74715]: [devicehealth INFO root] Starting
Dec 11 09:15:56 compute-0 ceph-mgr[74715]: [iostat DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec 11 09:15:56 compute-0 ceph-mgr[74715]: mgr load Constructed class from module: iostat
Dec 11 09:15:56 compute-0 ceph-mgr[74715]: [nfs DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec 11 09:15:56 compute-0 ceph-mgr[74715]: mgr load Constructed class from module: nfs
Dec 11 09:15:56 compute-0 ceph-mgr[74715]: [orchestrator DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec 11 09:15:56 compute-0 ceph-mgr[74715]: mgr load Constructed class from module: orchestrator
Dec 11 09:15:56 compute-0 ceph-mgr[74715]: [pg_autoscaler DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec 11 09:15:56 compute-0 ceph-mgr[74715]: mgr load Constructed class from module: pg_autoscaler
Dec 11 09:15:56 compute-0 ceph-mgr[74715]: [progress DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec 11 09:15:56 compute-0 ceph-mgr[74715]: mgr load Constructed class from module: progress
Dec 11 09:15:56 compute-0 ceph-mgr[74715]: [progress INFO root] Loading...
Dec 11 09:15:56 compute-0 ceph-mgr[74715]: [progress INFO root] Loaded [<progress.module.GhostEvent object at 0x7f22cb39d940>, <progress.module.GhostEvent object at 0x7f22cb39dd90>, <progress.module.GhostEvent object at 0x7f22cb39ddc0>, <progress.module.GhostEvent object at 0x7f22cb39ddf0>, <progress.module.GhostEvent object at 0x7f22cb39de20>, <progress.module.GhostEvent object at 0x7f22cb39de50>, <progress.module.GhostEvent object at 0x7f22cb39de80>, <progress.module.GhostEvent object at 0x7f22cb39deb0>, <progress.module.GhostEvent object at 0x7f22cb39dee0>, <progress.module.GhostEvent object at 0x7f22cb39df10>, <progress.module.GhostEvent object at 0x7f22cb39df40>, <progress.module.GhostEvent object at 0x7f22cb39df70>, <progress.module.GhostEvent object at 0x7f22cb39dfa0>] historic events
Dec 11 09:15:56 compute-0 ceph-mgr[74715]: [rbd_support DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec 11 09:15:56 compute-0 ceph-mgr[74715]: [pg_autoscaler INFO root] _maybe_adjust
Dec 11 09:15:56 compute-0 ceph-mgr[74715]: [progress INFO root] Loaded OSDMap, ready.
Dec 11 09:15:57 compute-0 ceph-mgr[74715]: [rbd_support INFO root] recovery thread starting
Dec 11 09:15:57 compute-0 ceph-mgr[74715]: [rbd_support INFO root] starting setup
Dec 11 09:15:57 compute-0 ceph-mon[74426]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.wwpcae/mirror_snapshot_schedule"} v 0)
Dec 11 09:15:57 compute-0 ceph-mon[74426]: log_channel(audit) log [INF] : from='mgr.14400 192.168.122.100:0/1063598679' entity='mgr.compute-0.wwpcae' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.wwpcae/mirror_snapshot_schedule"}]: dispatch
Dec 11 09:15:57 compute-0 ceph-mgr[74715]: mgr load Constructed class from module: rbd_support
Dec 11 09:15:57 compute-0 ceph-mgr[74715]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec 11 09:15:57 compute-0 ceph-mgr[74715]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 11 09:15:57 compute-0 ceph-mgr[74715]: [restful DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec 11 09:15:57 compute-0 ceph-mgr[74715]: mgr load Constructed class from module: restful
Dec 11 09:15:57 compute-0 ceph-mgr[74715]: [restful INFO root] server_addr: :: server_port: 8003
Dec 11 09:15:57 compute-0 ceph-mgr[74715]: [status DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec 11 09:15:57 compute-0 ceph-mgr[74715]: mgr load Constructed class from module: status
Dec 11 09:15:57 compute-0 ceph-mgr[74715]: [restful WARNING root] server not running: no certificate configured
Dec 11 09:15:57 compute-0 ceph-mgr[74715]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 11 09:15:57 compute-0 ceph-mgr[74715]: [telemetry DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec 11 09:15:57 compute-0 ceph-mgr[74715]: mgr load Constructed class from module: telemetry
Dec 11 09:15:57 compute-0 ceph-mgr[74715]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 11 09:15:57 compute-0 ceph-mgr[74715]: [volumes DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec 11 09:15:57 compute-0 ceph-mgr[74715]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 11 09:15:57 compute-0 ceph-mgr[74715]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: starting
Dec 11 09:15:57 compute-0 ceph-mgr[74715]: [rbd_support INFO root] PerfHandler: starting
Dec 11 09:15:57 compute-0 ceph-mgr[74715]: [rbd_support INFO root] load_task_task: vms, start_after=
Dec 11 09:15:57 compute-0 ceph-mgr[74715]: [rbd_support INFO root] load_task_task: volumes, start_after=
Dec 11 09:15:57 compute-0 ceph-mgr[74715]: [rbd_support INFO root] load_task_task: backups, start_after=
Dec 11 09:15:57 compute-0 ceph-mgr[74715]: mgr load Constructed class from module: volumes
Dec 11 09:15:57 compute-0 ceph-mgr[74715]: [rbd_support INFO root] load_task_task: images, start_after=
Dec 11 09:15:57 compute-0 ceph-mgr[74715]: [rbd_support INFO root] TaskHandler: starting
Dec 11 09:15:57 compute-0 ceph-mon[74426]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.wwpcae/trash_purge_schedule"} v 0)
Dec 11 09:15:57 compute-0 ceph-mon[74426]: log_channel(audit) log [INF] : from='mgr.14400 192.168.122.100:0/1063598679' entity='mgr.compute-0.wwpcae' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.wwpcae/trash_purge_schedule"}]: dispatch
Dec 11 09:15:57 compute-0 ceph-mgr[74715]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec 11 09:15:57 compute-0 ceph-mgr[74715]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 11 09:15:57 compute-0 ceph-mgr[74715]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 11 09:15:57 compute-0 ceph-mgr[74715]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 11 09:15:57 compute-0 ceph-mgr[74715]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 11 09:15:57 compute-0 ceph-mgr[74715]: [rbd_support INFO root] TrashPurgeScheduleHandler: starting
Dec 11 09:15:57 compute-0 ceph-mgr[74715]: [rbd_support INFO root] setup complete
Dec 11 09:15:57 compute-0 ceph-mon[74426]: 6.d scrub starts
Dec 11 09:15:57 compute-0 ceph-mon[74426]: 6.d scrub ok
Dec 11 09:15:57 compute-0 ceph-mon[74426]: mgrmap e24: compute-0.wwpcae(active, starting, since 6s), standbys: compute-1.unesvp, compute-2.uiimcn
Dec 11 09:15:57 compute-0 ceph-mon[74426]: 5.12 scrub starts
Dec 11 09:15:57 compute-0 ceph-mon[74426]: 5.12 scrub ok
Dec 11 09:15:57 compute-0 ceph-mon[74426]: 6.1d scrub starts
Dec 11 09:15:57 compute-0 ceph-mon[74426]: 6.1d scrub ok
Dec 11 09:15:57 compute-0 ceph-mon[74426]: Active manager daemon compute-0.wwpcae restarted
Dec 11 09:15:57 compute-0 ceph-mon[74426]: Activating manager daemon compute-0.wwpcae
Dec 11 09:15:57 compute-0 ceph-mon[74426]: osdmap e43: 3 total, 3 up, 3 in
Dec 11 09:15:57 compute-0 ceph-mon[74426]: mgrmap e25: compute-0.wwpcae(active, starting, since 0.0371803s), standbys: compute-1.unesvp, compute-2.uiimcn
Dec 11 09:15:57 compute-0 ceph-mon[74426]: from='mgr.14400 192.168.122.100:0/1063598679' entity='mgr.compute-0.wwpcae' cmd=[{"prefix": "mon metadata", "id": "compute-0"}]: dispatch
Dec 11 09:15:57 compute-0 ceph-mon[74426]: from='mgr.14400 192.168.122.100:0/1063598679' entity='mgr.compute-0.wwpcae' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Dec 11 09:15:57 compute-0 ceph-mon[74426]: from='mgr.14400 192.168.122.100:0/1063598679' entity='mgr.compute-0.wwpcae' cmd=[{"prefix": "mon metadata", "id": "compute-2"}]: dispatch
Dec 11 09:15:57 compute-0 ceph-mon[74426]: from='mgr.14400 192.168.122.100:0/1063598679' entity='mgr.compute-0.wwpcae' cmd=[{"prefix": "mgr metadata", "who": "compute-0.wwpcae", "id": "compute-0.wwpcae"}]: dispatch
Dec 11 09:15:57 compute-0 ceph-mon[74426]: from='mgr.14400 192.168.122.100:0/1063598679' entity='mgr.compute-0.wwpcae' cmd=[{"prefix": "mgr metadata", "who": "compute-1.unesvp", "id": "compute-1.unesvp"}]: dispatch
Dec 11 09:15:57 compute-0 ceph-mon[74426]: from='mgr.14400 192.168.122.100:0/1063598679' entity='mgr.compute-0.wwpcae' cmd=[{"prefix": "mgr metadata", "who": "compute-2.uiimcn", "id": "compute-2.uiimcn"}]: dispatch
Dec 11 09:15:57 compute-0 ceph-mon[74426]: from='mgr.14400 192.168.122.100:0/1063598679' entity='mgr.compute-0.wwpcae' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Dec 11 09:15:57 compute-0 ceph-mon[74426]: from='mgr.14400 192.168.122.100:0/1063598679' entity='mgr.compute-0.wwpcae' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Dec 11 09:15:57 compute-0 ceph-mon[74426]: from='mgr.14400 192.168.122.100:0/1063598679' entity='mgr.compute-0.wwpcae' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Dec 11 09:15:57 compute-0 ceph-mon[74426]: from='mgr.14400 192.168.122.100:0/1063598679' entity='mgr.compute-0.wwpcae' cmd=[{"prefix": "mds metadata"}]: dispatch
Dec 11 09:15:57 compute-0 ceph-mon[74426]: from='mgr.14400 192.168.122.100:0/1063598679' entity='mgr.compute-0.wwpcae' cmd=[{"prefix": "osd metadata"}]: dispatch
Dec 11 09:15:57 compute-0 ceph-mon[74426]: from='mgr.14400 192.168.122.100:0/1063598679' entity='mgr.compute-0.wwpcae' cmd=[{"prefix": "mon metadata"}]: dispatch
Dec 11 09:15:57 compute-0 ceph-mon[74426]: Manager daemon compute-0.wwpcae is now available
Dec 11 09:15:57 compute-0 ceph-mon[74426]: from='mgr.14400 192.168.122.100:0/1063598679' entity='mgr.compute-0.wwpcae' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.wwpcae/mirror_snapshot_schedule"}]: dispatch
Dec 11 09:15:57 compute-0 ceph-mon[74426]: from='mgr.14400 192.168.122.100:0/1063598679' entity='mgr.compute-0.wwpcae' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.wwpcae/trash_purge_schedule"}]: dispatch
Dec 11 09:15:57 compute-0 ceph-mgr[74715]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: CephFS -> /api/cephfs
Dec 11 09:15:57 compute-0 ceph-mgr[74715]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: CephFsUi -> /ui-api/cephfs
Dec 11 09:15:57 compute-0 ceph-mgr[74715]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: CephFSSubvolume -> /api/cephfs/subvolume
Dec 11 09:15:57 compute-0 ceph-mgr[74715]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: CephFSSubvolumeGroups -> /api/cephfs/subvolume/group
Dec 11 09:15:57 compute-0 ceph-mgr[74715]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: CephFSSubvolumeSnapshots -> /api/cephfs/subvolume/snapshot
Dec 11 09:15:57 compute-0 ceph-mgr[74715]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: CephFsSnapshotClone -> /api/cephfs/subvolume/snapshot/clone
Dec 11 09:15:57 compute-0 ceph-mgr[74715]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: CephFSSnapshotSchedule -> /api/cephfs/snapshot/schedule
Dec 11 09:15:57 compute-0 ceph-mgr[74715]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: IscsiUi -> /ui-api/iscsi
Dec 11 09:15:57 compute-0 ceph-mgr[74715]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Iscsi -> /api/iscsi
Dec 11 09:15:57 compute-0 ceph-mgr[74715]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: IscsiTarget -> /api/iscsi/target
Dec 11 09:15:57 compute-0 ceph-mgr[74715]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: NFSGaneshaCluster -> /api/nfs-ganesha/cluster
Dec 11 09:15:57 compute-0 ceph-mgr[74715]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: NFSGaneshaExports -> /api/nfs-ganesha/export
Dec 11 09:15:57 compute-0 ceph-mgr[74715]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: NFSGaneshaUi -> /ui-api/nfs-ganesha
Dec 11 09:15:57 compute-0 ceph-mgr[74715]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Orchestrator -> /ui-api/orchestrator
Dec 11 09:15:57 compute-0 ceph-mgr[74715]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Service -> /api/service
Dec 11 09:15:57 compute-0 ceph-mgr[74715]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RbdMirroring -> /api/block/mirroring
Dec 11 09:15:57 compute-0 ceph-mgr[74715]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RbdMirroringSummary -> /api/block/mirroring/summary
Dec 11 09:15:57 compute-0 ceph-mgr[74715]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RbdMirroringPoolMode -> /api/block/mirroring/pool
Dec 11 09:15:57 compute-0 ceph-mgr[74715]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RbdMirroringPoolBootstrap -> /api/block/mirroring/pool/{pool_name}/bootstrap
Dec 11 09:15:57 compute-0 ceph-mgr[74715]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RbdMirroringPoolPeer -> /api/block/mirroring/pool/{pool_name}/peer
Dec 11 09:15:57 compute-0 ceph-mgr[74715]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RbdMirroringStatus -> /ui-api/block/mirroring
Dec 11 09:15:57 compute-0 ceph-mgr[74715]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Pool -> /api/pool
Dec 11 09:15:57 compute-0 ceph-mgr[74715]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: PoolUi -> /ui-api/pool
Dec 11 09:15:57 compute-0 ceph-mgr[74715]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RBDPool -> /api/pool
Dec 11 09:15:57 compute-0 ceph-mgr[74715]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Rbd -> /api/block/image
Dec 11 09:15:57 compute-0 ceph-mgr[74715]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RbdStatus -> /ui-api/block/rbd
Dec 11 09:15:57 compute-0 ceph-mgr[74715]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RbdSnapshot -> /api/block/image/{image_spec}/snap
Dec 11 09:15:57 compute-0 ceph-mgr[74715]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RbdTrash -> /api/block/image/trash
Dec 11 09:15:57 compute-0 ceph-mgr[74715]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RbdNamespace -> /api/block/pool/{pool_name}/namespace
Dec 11 09:15:57 compute-0 ceph-mgr[74715]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Rgw -> /ui-api/rgw
Dec 11 09:15:57 compute-0 ceph-mgr[74715]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RgwMultisiteStatus -> /ui-api/rgw/multisite
Dec 11 09:15:57 compute-0 ceph-mgr[74715]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RgwMultisiteController -> /api/rgw/multisite
Dec 11 09:15:57 compute-0 ceph-mgr[74715]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RgwDaemon -> /api/rgw/daemon
Dec 11 09:15:57 compute-0 ceph-mgr[74715]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RgwSite -> /api/rgw/site
Dec 11 09:15:57 compute-0 ceph-mgr[74715]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RgwBucket -> /api/rgw/bucket
Dec 11 09:15:57 compute-0 ceph-mgr[74715]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RgwBucketUi -> /ui-api/rgw/bucket
Dec 11 09:15:57 compute-0 ceph-mgr[74715]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RgwUser -> /api/rgw/user
Dec 11 09:15:57 compute-0 ceph-mgr[74715]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: rgwroles_CRUDClass -> /api/rgw/roles
Dec 11 09:15:57 compute-0 ceph-mgr[74715]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: rgwroles_CRUDClassMetadata -> /ui-api/rgw/roles
Dec 11 09:15:57 compute-0 ceph-mgr[74715]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RgwRealm -> /api/rgw/realm
Dec 11 09:15:57 compute-0 ceph-mgr[74715]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RgwZonegroup -> /api/rgw/zonegroup
Dec 11 09:15:57 compute-0 ceph-mgr[74715]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RgwZone -> /api/rgw/zone
Dec 11 09:15:57 compute-0 ceph-mgr[74715]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Auth -> /api/auth
Dec 11 09:15:57 compute-0 ceph-mgr[74715]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: clusteruser_CRUDClass -> /api/cluster/user
Dec 11 09:15:57 compute-0 ceph-mgr[74715]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: clusteruser_CRUDClassMetadata -> /ui-api/cluster/user
Dec 11 09:15:57 compute-0 ceph-mgr[74715]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Cluster -> /api/cluster
Dec 11 09:15:57 compute-0 ceph-mgr[74715]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: ClusterUpgrade -> /api/cluster/upgrade
Dec 11 09:15:57 compute-0 ceph-mgr[74715]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: ClusterConfiguration -> /api/cluster_conf
Dec 11 09:15:57 compute-0 ceph-mgr[74715]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: CrushRule -> /api/crush_rule
Dec 11 09:15:57 compute-0 ceph-mgr[74715]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: CrushRuleUi -> /ui-api/crush_rule
Dec 11 09:15:57 compute-0 ceph-mgr[74715]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Daemon -> /api/daemon
Dec 11 09:15:57 compute-0 ceph-mgr[74715]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Docs -> /docs
Dec 11 09:15:57 compute-0 ceph-mgr[74715]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: ErasureCodeProfile -> /api/erasure_code_profile
Dec 11 09:15:57 compute-0 ceph-mgr[74715]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: ErasureCodeProfileUi -> /ui-api/erasure_code_profile
Dec 11 09:15:57 compute-0 ceph-mgr[74715]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: FeedbackController -> /api/feedback
Dec 11 09:15:57 compute-0 ceph-mgr[74715]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: FeedbackApiController -> /api/feedback/api_key
Dec 11 09:15:57 compute-0 ceph-mgr[74715]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: FeedbackUiController -> /ui-api/feedback/api_key
Dec 11 09:15:57 compute-0 ceph-mgr[74715]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: FrontendLogging -> /ui-api/logging
Dec 11 09:15:57 compute-0 ceph-mgr[74715]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Grafana -> /api/grafana
Dec 11 09:15:57 compute-0 ceph-mgr[74715]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Host -> /api/host
Dec 11 09:15:57 compute-0 ceph-mgr[74715]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: HostUi -> /ui-api/host
Dec 11 09:15:57 compute-0 ceph-mgr[74715]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Health -> /api/health
Dec 11 09:15:57 compute-0 ceph-mgr[74715]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: HomeController -> /
Dec 11 09:15:57 compute-0 ceph-mgr[74715]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: LangsController -> /ui-api/langs
Dec 11 09:15:57 compute-0 ceph-mgr[74715]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: LoginController -> /ui-api/login
Dec 11 09:15:57 compute-0 ceph-mgr[74715]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Logs -> /api/logs
Dec 11 09:15:57 compute-0 ceph-mgr[74715]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: MgrModules -> /api/mgr/module
Dec 11 09:15:57 compute-0 ceph-mgr[74715]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Monitor -> /api/monitor
Dec 11 09:15:57 compute-0 ceph-mgr[74715]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: NVMeoFGateway -> /api/nvmeof/gateway
Dec 11 09:15:57 compute-0 ceph-mgr[74715]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: NVMeoFSpdk -> /api/nvmeof/spdk
Dec 11 09:15:57 compute-0 ceph-mgr[74715]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: NVMeoFSubsystem -> /api/nvmeof/subsystem
Dec 11 09:15:57 compute-0 ceph-mgr[74715]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: NVMeoFListener -> /api/nvmeof/subsystem/{nqn}/listener
Dec 11 09:15:57 compute-0 ceph-mgr[74715]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: NVMeoFNamespace -> /api/nvmeof/subsystem/{nqn}/namespace
Dec 11 09:15:57 compute-0 ceph-mgr[74715]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: NVMeoFHost -> /api/nvmeof/subsystem/{nqn}/host
Dec 11 09:15:57 compute-0 ceph-mgr[74715]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: NVMeoFConnection -> /api/nvmeof/subsystem/{nqn}/connection
Dec 11 09:15:57 compute-0 ceph-mgr[74715]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: NVMeoFTcpUI -> /ui-api/nvmeof
Dec 11 09:15:57 compute-0 ceph-mgr[74715]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Osd -> /api/osd
Dec 11 09:15:57 compute-0 ceph-mgr[74715]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: OsdUi -> /ui-api/osd
Dec 11 09:15:57 compute-0 ceph-mgr[74715]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: OsdFlagsController -> /api/osd/flags
Dec 11 09:15:57 compute-0 ceph-mgr[74715]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: MdsPerfCounter -> /api/perf_counters/mds
Dec 11 09:15:57 compute-0 ceph-mgr[74715]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: MonPerfCounter -> /api/perf_counters/mon
Dec 11 09:15:57 compute-0 ceph-mgr[74715]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: OsdPerfCounter -> /api/perf_counters/osd
Dec 11 09:15:57 compute-0 ceph-mgr[74715]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RgwPerfCounter -> /api/perf_counters/rgw
Dec 11 09:15:57 compute-0 ceph-mgr[74715]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RbdMirrorPerfCounter -> /api/perf_counters/rbd-mirror
Dec 11 09:15:57 compute-0 ceph-mgr[74715]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: MgrPerfCounter -> /api/perf_counters/mgr
Dec 11 09:15:57 compute-0 ceph-mgr[74715]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: TcmuRunnerPerfCounter -> /api/perf_counters/tcmu-runner
Dec 11 09:15:57 compute-0 ceph-mgr[74715]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: PerfCounters -> /api/perf_counters
Dec 11 09:15:57 compute-0 ceph-mgr[74715]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: PrometheusReceiver -> /api/prometheus_receiver
Dec 11 09:15:57 compute-0 ceph-mgr[74715]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Prometheus -> /api/prometheus
Dec 11 09:15:57 compute-0 ceph-mgr[74715]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: PrometheusNotifications -> /api/prometheus/notifications
Dec 11 09:15:57 compute-0 ceph-mgr[74715]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: PrometheusSettings -> /ui-api/prometheus
Dec 11 09:15:57 compute-0 ceph-mgr[74715]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Role -> /api/role
Dec 11 09:15:57 compute-0 ceph-mgr[74715]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Scope -> /ui-api/scope
Dec 11 09:15:57 compute-0 ceph-mgr[74715]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Saml2 -> /auth/saml2
Dec 11 09:15:57 compute-0 ceph-mgr[74715]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Settings -> /api/settings
Dec 11 09:15:57 compute-0 ceph-mgr[74715]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: StandardSettings -> /ui-api/standard_settings
Dec 11 09:15:57 compute-0 ceph-mgr[74715]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Summary -> /api/summary
Dec 11 09:15:57 compute-0 ceph-mgr[74715]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Task -> /api/task
Dec 11 09:15:57 compute-0 ceph-mgr[74715]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Telemetry -> /api/telemetry
Dec 11 09:15:57 compute-0 ceph-mgr[74715]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: User -> /api/user
Dec 11 09:15:57 compute-0 ceph-mgr[74715]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: UserPasswordPolicy -> /api/user
Dec 11 09:15:57 compute-0 ceph-mgr[74715]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: UserChangePassword -> /api/user/{username}
Dec 11 09:15:57 compute-0 ceph-mgr[74715]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: FeatureTogglesEndpoint -> /api/feature_toggles
Dec 11 09:15:57 compute-0 ceph-mgr[74715]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: MessageOfTheDay -> /ui-api/motd
Dec 11 09:15:57 compute-0 sshd-session[90498]: Accepted publickey for ceph-admin from 192.168.122.100 port 51840 ssh2: RSA SHA256:VeIx2NZka5hi0niQjHCLie+FE2InrWghbFhBMbpMPGo
Dec 11 09:15:57 compute-0 ceph-osd[82859]: log_channel(cluster) log [DBG] : 3.19 deep-scrub starts
Dec 11 09:15:57 compute-0 systemd-logind[792]: New session 35 of user ceph-admin.
Dec 11 09:15:57 compute-0 systemd[1]: Created slice User Slice of UID 42477.
Dec 11 09:15:57 compute-0 ceph-osd[82859]: log_channel(cluster) log [DBG] : 3.19 deep-scrub ok
Dec 11 09:15:57 compute-0 systemd[1]: Starting User Runtime Directory /run/user/42477...
Dec 11 09:15:57 compute-0 systemd[1]: Finished User Runtime Directory /run/user/42477.
Dec 11 09:15:57 compute-0 systemd[1]: Starting User Manager for UID 42477...
Dec 11 09:15:57 compute-0 ceph-mgr[74715]: [dashboard INFO dashboard.module] Engine started.
Dec 11 09:15:57 compute-0 systemd[90513]: pam_unix(systemd-user:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Dec 11 09:15:57 compute-0 systemd[90513]: Queued start job for default target Main User Target.
Dec 11 09:15:57 compute-0 systemd[90513]: Created slice User Application Slice.
Dec 11 09:15:57 compute-0 systemd[90513]: Started Mark boot as successful after the user session has run 2 minutes.
Dec 11 09:15:57 compute-0 systemd[90513]: Started Daily Cleanup of User's Temporary Directories.
Dec 11 09:15:57 compute-0 systemd[90513]: Reached target Paths.
Dec 11 09:15:57 compute-0 systemd[90513]: Reached target Timers.
Dec 11 09:15:57 compute-0 systemd[90513]: Starting D-Bus User Message Bus Socket...
Dec 11 09:15:57 compute-0 systemd[90513]: Starting Create User's Volatile Files and Directories...
Dec 11 09:15:57 compute-0 systemd[90513]: Listening on D-Bus User Message Bus Socket.
Dec 11 09:15:57 compute-0 systemd[90513]: Reached target Sockets.
Dec 11 09:15:57 compute-0 systemd[90513]: Finished Create User's Volatile Files and Directories.
Dec 11 09:15:57 compute-0 systemd[90513]: Reached target Basic System.
Dec 11 09:15:57 compute-0 systemd[90513]: Reached target Main User Target.
Dec 11 09:15:57 compute-0 systemd[90513]: Startup finished in 125ms.
Dec 11 09:15:57 compute-0 systemd[1]: Started User Manager for UID 42477.
Dec 11 09:15:57 compute-0 systemd[1]: Started Session 35 of User ceph-admin.
Dec 11 09:15:57 compute-0 sshd-session[90498]: pam_unix(sshd:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Dec 11 09:15:57 compute-0 sudo[90532]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 11 09:15:57 compute-0 sudo[90532]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 11 09:15:57 compute-0 sudo[90532]: pam_unix(sudo:session): session closed for user root
Dec 11 09:15:57 compute-0 sudo[90557]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ls
Dec 11 09:15:57 compute-0 sudo[90557]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 11 09:15:57 compute-0 ceph-mon[74426]: log_channel(cluster) log [DBG] : mgrmap e26: compute-0.wwpcae(active, since 1.06228s), standbys: compute-1.unesvp, compute-2.uiimcn
Dec 11 09:15:57 compute-0 ceph-mgr[74715]: log_channel(audit) log [DBG] : from='client.14394 -' entity='client.admin' cmd=[{"prefix": "fs volume create", "name": "cephfs", "placement": "compute-0 compute-1 compute-2 ", "target": ["mon-mgr", ""]}]: dispatch
Dec 11 09:15:57 compute-0 ceph-mgr[74715]: [volumes INFO volumes.module] Starting _cmd_fs_volume_create(name:cephfs, placement:compute-0 compute-1 compute-2 , prefix:fs volume create, target:['mon-mgr', '']) < ""
Dec 11 09:15:57 compute-0 ceph-mgr[74715]: log_channel(cluster) log [DBG] : pgmap v3: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Dec 11 09:15:57 compute-0 ceph-mon[74426]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool create", "pool": "cephfs.cephfs.meta"} v 0)
Dec 11 09:15:57 compute-0 ceph-mon[74426]: log_channel(audit) log [INF] : from='mgr.14400 192.168.122.100:0/1063598679' entity='mgr.compute-0.wwpcae' cmd=[{"prefix": "osd pool create", "pool": "cephfs.cephfs.meta"}]: dispatch
Dec 11 09:15:57 compute-0 ceph-mon[74426]: mon.compute-0@0(leader) e3 handle_command mon_command({"bulk": true, "prefix": "osd pool create", "pool": "cephfs.cephfs.data"} v 0)
Dec 11 09:15:57 compute-0 ceph-mon[74426]: log_channel(audit) log [INF] : from='mgr.14400 192.168.122.100:0/1063598679' entity='mgr.compute-0.wwpcae' cmd=[{"bulk": true, "prefix": "osd pool create", "pool": "cephfs.cephfs.data"}]: dispatch
Dec 11 09:15:57 compute-0 ceph-mon[74426]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "fs new", "fs_name": "cephfs", "metadata": "cephfs.cephfs.meta", "data": "cephfs.cephfs.data"} v 0)
Dec 11 09:15:57 compute-0 ceph-mon[74426]: log_channel(audit) log [INF] : from='mgr.14400 192.168.122.100:0/1063598679' entity='mgr.compute-0.wwpcae' cmd=[{"prefix": "fs new", "fs_name": "cephfs", "metadata": "cephfs.cephfs.meta", "data": "cephfs.cephfs.data"}]: dispatch
Dec 11 09:15:57 compute-0 ceph-mon[74426]: mon.compute-0@0(leader).osd e43 do_prune osdmap full prune enabled
Dec 11 09:15:57 compute-0 ceph-mon[74426]: log_channel(cluster) log [ERR] : Health check failed: 1 filesystem is offline (MDS_ALL_DOWN)
Dec 11 09:15:57 compute-0 ceph-mon[74426]: log_channel(cluster) log [WRN] : Health check failed: 1 filesystem is online with fewer MDS than max_mds (MDS_UP_LESS_THAN_MAX)
Dec 11 09:15:57 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-mon-compute-0[74422]: 2025-12-11T09:15:57.913+0000 7f034efb9640 -1 log_channel(cluster) log [ERR] : Health check failed: 1 filesystem is offline (MDS_ALL_DOWN)
Dec 11 09:15:57 compute-0 ceph-mon[74426]: log_channel(audit) log [INF] : from='mgr.14400 192.168.122.100:0/1063598679' entity='mgr.compute-0.wwpcae' cmd='[{"prefix": "fs new", "fs_name": "cephfs", "metadata": "cephfs.cephfs.meta", "data": "cephfs.cephfs.data"}]': finished
Dec 11 09:15:57 compute-0 ceph-mon[74426]: mon.compute-0@0(leader).mds e2 new map
Dec 11 09:15:57 compute-0 ceph-mon[74426]: mon.compute-0@0(leader).mds e2 print_map
                                           e2
                                           btime 2025-12-11T09:15:57:915202+0000
                                           enable_multiple, ever_enabled_multiple: 1,1
                                           default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}
                                           legacy client fscid: 1
                                            
                                           Filesystem 'cephfs' (1)
                                           fs_name        cephfs
                                           epoch        2
                                           flags        12 joinable allow_snaps allow_multimds_snaps
                                           created        2025-12-11T09:15:57.915143+0000
                                           modified        2025-12-11T09:15:57.915143+0000
                                           tableserver        0
                                           root        0
                                           session_timeout        60
                                           session_autoclose        300
                                           max_file_size        1099511627776
                                           max_xattr_size        65536
                                           required_client_features        {}
                                           last_failure        0
                                           last_failure_osd_epoch        0
                                           compat        compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}
                                           max_mds        1
                                           in        
                                           up        {}
                                           failed        
                                           damaged        
                                           stopped        
                                           data_pools        [7]
                                           metadata_pool        6
                                           inline_data        disabled
                                           balancer        
                                           bal_rank_mask        -1
                                           standby_count_wanted        0
                                           qdb_cluster        leader: 0 members: 
                                            
                                            
Dec 11 09:15:57 compute-0 ceph-mon[74426]: mon.compute-0@0(leader).osd e44 e44: 3 total, 3 up, 3 in
Dec 11 09:15:57 compute-0 ceph-mon[74426]: log_channel(cluster) log [DBG] : osdmap e44: 3 total, 3 up, 3 in
Dec 11 09:15:57 compute-0 ceph-mon[74426]: log_channel(cluster) log [DBG] : fsmap cephfs:0
Dec 11 09:15:57 compute-0 ceph-mgr[74715]: [cephadm INFO root] Saving service mds.cephfs spec with placement compute-0;compute-1;compute-2
Dec 11 09:15:57 compute-0 ceph-mgr[74715]: log_channel(cephadm) log [INF] : Saving service mds.cephfs spec with placement compute-0;compute-1;compute-2
Dec 11 09:15:57 compute-0 ceph-mon[74426]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mds.cephfs}] v 0)
Dec 11 09:15:57 compute-0 ceph-mon[74426]: log_channel(audit) log [INF] : from='mgr.14400 192.168.122.100:0/1063598679' entity='mgr.compute-0.wwpcae' 
Dec 11 09:15:57 compute-0 ceph-mgr[74715]: [volumes INFO volumes.module] Finishing _cmd_fs_volume_create(name:cephfs, placement:compute-0 compute-1 compute-2 , prefix:fs volume create, target:['mon-mgr', '']) < ""
Dec 11 09:15:57 compute-0 systemd[1]: libpod-e475b1185f58da00d0f4dcf39e0c4aeefcd7966c371e69699683c0b40e6ea3b4.scope: Deactivated successfully.
Dec 11 09:15:57 compute-0 podman[90310]: 2025-12-11 09:15:57.977960713 +0000 UTC m=+12.163811389 container died e475b1185f58da00d0f4dcf39e0c4aeefcd7966c371e69699683c0b40e6ea3b4 (image=quay.io/ceph/ceph:v19, name=jolly_hertz, org.label-schema.license=GPLv2, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Dec 11 09:15:58 compute-0 systemd[1]: var-lib-containers-storage-overlay-6f9d50d60188ab44b7ecbb8d71f4c0f78c3cd5b14dba9f790f694a20b756bb5f-merged.mount: Deactivated successfully.
Dec 11 09:15:58 compute-0 podman[90310]: 2025-12-11 09:15:58.032263613 +0000 UTC m=+12.218114289 container remove e475b1185f58da00d0f4dcf39e0c4aeefcd7966c371e69699683c0b40e6ea3b4 (image=quay.io/ceph/ceph:v19, name=jolly_hertz, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250325)
Dec 11 09:15:58 compute-0 systemd[1]: libpod-conmon-e475b1185f58da00d0f4dcf39e0c4aeefcd7966c371e69699683c0b40e6ea3b4.scope: Deactivated successfully.
Dec 11 09:15:58 compute-0 sudo[90307]: pam_unix(sudo:session): session closed for user root
Dec 11 09:15:58 compute-0 ceph-mon[74426]: 5.1b scrub starts
Dec 11 09:15:58 compute-0 ceph-mon[74426]: 5.1b scrub ok
Dec 11 09:15:58 compute-0 ceph-mon[74426]: 2.15 scrub starts
Dec 11 09:15:58 compute-0 ceph-mon[74426]: 2.15 scrub ok
Dec 11 09:15:58 compute-0 ceph-mon[74426]: 3.19 deep-scrub starts
Dec 11 09:15:58 compute-0 ceph-mon[74426]: 3.19 deep-scrub ok
Dec 11 09:15:58 compute-0 ceph-mon[74426]: mgrmap e26: compute-0.wwpcae(active, since 1.06228s), standbys: compute-1.unesvp, compute-2.uiimcn
Dec 11 09:15:58 compute-0 ceph-mon[74426]: from='mgr.14400 192.168.122.100:0/1063598679' entity='mgr.compute-0.wwpcae' cmd=[{"prefix": "osd pool create", "pool": "cephfs.cephfs.meta"}]: dispatch
Dec 11 09:15:58 compute-0 ceph-mon[74426]: from='mgr.14400 192.168.122.100:0/1063598679' entity='mgr.compute-0.wwpcae' cmd=[{"bulk": true, "prefix": "osd pool create", "pool": "cephfs.cephfs.data"}]: dispatch
Dec 11 09:15:58 compute-0 ceph-mon[74426]: from='mgr.14400 192.168.122.100:0/1063598679' entity='mgr.compute-0.wwpcae' cmd=[{"prefix": "fs new", "fs_name": "cephfs", "metadata": "cephfs.cephfs.meta", "data": "cephfs.cephfs.data"}]: dispatch
Dec 11 09:15:58 compute-0 ceph-mon[74426]: Health check failed: 1 filesystem is offline (MDS_ALL_DOWN)
Dec 11 09:15:58 compute-0 ceph-mon[74426]: Health check failed: 1 filesystem is online with fewer MDS than max_mds (MDS_UP_LESS_THAN_MAX)
Dec 11 09:15:58 compute-0 ceph-mon[74426]: from='mgr.14400 192.168.122.100:0/1063598679' entity='mgr.compute-0.wwpcae' cmd='[{"prefix": "fs new", "fs_name": "cephfs", "metadata": "cephfs.cephfs.meta", "data": "cephfs.cephfs.data"}]': finished
Dec 11 09:15:58 compute-0 ceph-mon[74426]: osdmap e44: 3 total, 3 up, 3 in
Dec 11 09:15:58 compute-0 ceph-mon[74426]: fsmap cephfs:0
Dec 11 09:15:58 compute-0 ceph-mon[74426]: from='mgr.14400 192.168.122.100:0/1063598679' entity='mgr.compute-0.wwpcae' 
Dec 11 09:15:58 compute-0 sudo[90657]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qljpzacvauxnfsjkvddviwwngctmrrhh ; /usr/bin/python3'
Dec 11 09:15:58 compute-0 sudo[90657]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 11 09:15:58 compute-0 python3[90668]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /tmp/ceph_mds.yml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch apply --in-file /home/ceph_spec.yaml _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 11 09:15:58 compute-0 podman[90690]: 2025-12-11 09:15:58.425241894 +0000 UTC m=+0.060609528 container exec 9c08b5b1828392d2cc014af7ec9c415c9c915d4b6dd9798da9403e6929851bd4 (image=quay.io/ceph/ceph:v19, name=ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-mon-compute-0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, OSD_FLAVOR=default, CEPH_REF=squid)
Dec 11 09:15:58 compute-0 ceph-osd[82859]: log_channel(cluster) log [DBG] : 2.19 scrub starts
Dec 11 09:15:58 compute-0 ceph-osd[82859]: log_channel(cluster) log [DBG] : 2.19 scrub ok
Dec 11 09:15:58 compute-0 podman[90710]: 2025-12-11 09:15:58.480519594 +0000 UTC m=+0.041431307 container create 9669c6f2647e8b75b9bd1d0adee4c71651dbd0e2b469869e6a946e2d99fd7b7f (image=quay.io/ceph/ceph:v19, name=thirsty_ptolemy, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Dec 11 09:15:58 compute-0 systemd[1]: Started libpod-conmon-9669c6f2647e8b75b9bd1d0adee4c71651dbd0e2b469869e6a946e2d99fd7b7f.scope.
Dec 11 09:15:58 compute-0 systemd[1]: Started libcrun container.
Dec 11 09:15:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6cf67063417a67065a2a71a70c78ec8920312cf93ef7ce313949be615a746cd8/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec 11 09:15:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6cf67063417a67065a2a71a70c78ec8920312cf93ef7ce313949be615a746cd8/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 11 09:15:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6cf67063417a67065a2a71a70c78ec8920312cf93ef7ce313949be615a746cd8/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Dec 11 09:15:58 compute-0 podman[90710]: 2025-12-11 09:15:58.462036986 +0000 UTC m=+0.022948729 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Dec 11 09:15:58 compute-0 podman[90710]: 2025-12-11 09:15:58.567239299 +0000 UTC m=+0.128151032 container init 9669c6f2647e8b75b9bd1d0adee4c71651dbd0e2b469869e6a946e2d99fd7b7f (image=quay.io/ceph/ceph:v19, name=thirsty_ptolemy, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2)
Dec 11 09:15:58 compute-0 podman[90690]: 2025-12-11 09:15:58.578855813 +0000 UTC m=+0.214223437 container exec_died 9c08b5b1828392d2cc014af7ec9c415c9c915d4b6dd9798da9403e6929851bd4 (image=quay.io/ceph/ceph:v19, name=ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-mon-compute-0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 11 09:15:58 compute-0 podman[90710]: 2025-12-11 09:15:58.582403204 +0000 UTC m=+0.143314927 container start 9669c6f2647e8b75b9bd1d0adee4c71651dbd0e2b469869e6a946e2d99fd7b7f (image=quay.io/ceph/ceph:v19, name=thirsty_ptolemy, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=squid)
Dec 11 09:15:58 compute-0 podman[90710]: 2025-12-11 09:15:58.591492219 +0000 UTC m=+0.152403932 container attach 9669c6f2647e8b75b9bd1d0adee4c71651dbd0e2b469869e6a946e2d99fd7b7f (image=quay.io/ceph/ceph:v19, name=thirsty_ptolemy, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=squid, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0)
Dec 11 09:15:58 compute-0 ceph-mgr[74715]: [cephadm INFO cherrypy.error] [11/Dec/2025:09:15:58] ENGINE Bus STARTING
Dec 11 09:15:58 compute-0 ceph-mgr[74715]: log_channel(cephadm) log [INF] : [11/Dec/2025:09:15:58] ENGINE Bus STARTING
Dec 11 09:15:58 compute-0 ceph-mon[74426]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Dec 11 09:15:58 compute-0 ceph-mon[74426]: log_channel(audit) log [INF] : from='mgr.14400 192.168.122.100:0/1063598679' entity='mgr.compute-0.wwpcae' 
Dec 11 09:15:58 compute-0 ceph-mon[74426]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Dec 11 09:15:58 compute-0 ceph-mgr[74715]: [cephadm INFO cherrypy.error] [11/Dec/2025:09:15:58] ENGINE Serving on https://192.168.122.100:7150
Dec 11 09:15:58 compute-0 ceph-mgr[74715]: log_channel(cephadm) log [INF] : [11/Dec/2025:09:15:58] ENGINE Serving on https://192.168.122.100:7150
Dec 11 09:15:58 compute-0 ceph-mgr[74715]: [cephadm INFO cherrypy.error] [11/Dec/2025:09:15:58] ENGINE Client ('192.168.122.100', 45662) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)')
Dec 11 09:15:58 compute-0 ceph-mgr[74715]: log_channel(cephadm) log [INF] : [11/Dec/2025:09:15:58] ENGINE Client ('192.168.122.100', 45662) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)')
Dec 11 09:15:58 compute-0 ceph-mon[74426]: log_channel(audit) log [INF] : from='mgr.14400 192.168.122.100:0/1063598679' entity='mgr.compute-0.wwpcae' 
Dec 11 09:15:58 compute-0 ceph-mgr[74715]: log_channel(cluster) log [DBG] : pgmap v5: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Dec 11 09:15:58 compute-0 ceph-mgr[74715]: log_channel(audit) log [DBG] : from='client.14424 -' entity='client.admin' cmd=[{"prefix": "orch apply", "target": ["mon-mgr", ""]}]: dispatch
Dec 11 09:15:58 compute-0 ceph-mgr[74715]: [cephadm INFO root] Saving service mds.cephfs spec with placement compute-0;compute-1;compute-2
Dec 11 09:15:58 compute-0 ceph-mgr[74715]: log_channel(cephadm) log [INF] : Saving service mds.cephfs spec with placement compute-0;compute-1;compute-2
Dec 11 09:15:58 compute-0 ceph-mgr[74715]: [cephadm INFO cherrypy.error] [11/Dec/2025:09:15:58] ENGINE Serving on http://192.168.122.100:8765
Dec 11 09:15:58 compute-0 ceph-mgr[74715]: log_channel(cephadm) log [INF] : [11/Dec/2025:09:15:58] ENGINE Serving on http://192.168.122.100:8765
Dec 11 09:15:58 compute-0 ceph-mon[74426]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mds.cephfs}] v 0)
Dec 11 09:15:58 compute-0 ceph-mgr[74715]: [cephadm INFO cherrypy.error] [11/Dec/2025:09:15:58] ENGINE Bus STARTED
Dec 11 09:15:58 compute-0 ceph-mgr[74715]: log_channel(cephadm) log [INF] : [11/Dec/2025:09:15:58] ENGINE Bus STARTED
Dec 11 09:15:58 compute-0 ceph-mon[74426]: log_channel(audit) log [INF] : from='mgr.14400 192.168.122.100:0/1063598679' entity='mgr.compute-0.wwpcae' 
Dec 11 09:15:58 compute-0 thirsty_ptolemy[90724]: Scheduled mds.cephfs update...
Dec 11 09:15:59 compute-0 systemd[1]: libpod-9669c6f2647e8b75b9bd1d0adee4c71651dbd0e2b469869e6a946e2d99fd7b7f.scope: Deactivated successfully.
Dec 11 09:15:59 compute-0 podman[90710]: 2025-12-11 09:15:59.005407126 +0000 UTC m=+0.566318859 container died 9669c6f2647e8b75b9bd1d0adee4c71651dbd0e2b469869e6a946e2d99fd7b7f (image=quay.io/ceph/ceph:v19, name=thirsty_ptolemy, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 11 09:15:59 compute-0 ceph-mon[74426]: mon.compute-0@0(leader).osd e44 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 11 09:15:59 compute-0 ceph-mon[74426]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0)
Dec 11 09:15:59 compute-0 ceph-mgr[74715]: [devicehealth INFO root] Check health
Dec 11 09:15:59 compute-0 systemd[1]: var-lib-containers-storage-overlay-6cf67063417a67065a2a71a70c78ec8920312cf93ef7ce313949be615a746cd8-merged.mount: Deactivated successfully.
Dec 11 09:15:59 compute-0 ceph-mon[74426]: log_channel(audit) log [INF] : from='mgr.14400 192.168.122.100:0/1063598679' entity='mgr.compute-0.wwpcae' 
Dec 11 09:15:59 compute-0 ceph-mon[74426]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Dec 11 09:15:59 compute-0 ceph-mon[74426]: log_channel(audit) log [INF] : from='mgr.14400 192.168.122.100:0/1063598679' entity='mgr.compute-0.wwpcae' 
Dec 11 09:15:59 compute-0 sudo[90557]: pam_unix(sudo:session): session closed for user root
Dec 11 09:15:59 compute-0 podman[90710]: 2025-12-11 09:15:59.068547822 +0000 UTC m=+0.629459535 container remove 9669c6f2647e8b75b9bd1d0adee4c71651dbd0e2b469869e6a946e2d99fd7b7f (image=quay.io/ceph/ceph:v19, name=thirsty_ptolemy, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, ceph=True, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 11 09:15:59 compute-0 ceph-mon[74426]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec 11 09:15:59 compute-0 systemd[1]: libpod-conmon-9669c6f2647e8b75b9bd1d0adee4c71651dbd0e2b469869e6a946e2d99fd7b7f.scope: Deactivated successfully.
Dec 11 09:15:59 compute-0 ceph-mon[74426]: log_channel(audit) log [INF] : from='mgr.14400 192.168.122.100:0/1063598679' entity='mgr.compute-0.wwpcae' 
Dec 11 09:15:59 compute-0 ceph-mon[74426]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec 11 09:15:59 compute-0 sudo[90657]: pam_unix(sudo:session): session closed for user root
Dec 11 09:15:59 compute-0 ceph-mon[74426]: log_channel(audit) log [INF] : from='mgr.14400 192.168.122.100:0/1063598679' entity='mgr.compute-0.wwpcae' 
Dec 11 09:15:59 compute-0 sudo[90862]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 11 09:15:59 compute-0 sudo[90862]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 11 09:15:59 compute-0 sudo[90862]: pam_unix(sudo:session): session closed for user root
Dec 11 09:15:59 compute-0 ceph-mon[74426]: 6.19 deep-scrub starts
Dec 11 09:15:59 compute-0 ceph-mon[74426]: 6.19 deep-scrub ok
Dec 11 09:15:59 compute-0 ceph-mon[74426]: Saving service mds.cephfs spec with placement compute-0;compute-1;compute-2
Dec 11 09:15:59 compute-0 ceph-mon[74426]: 2.d scrub starts
Dec 11 09:15:59 compute-0 ceph-mon[74426]: 2.d scrub ok
Dec 11 09:15:59 compute-0 ceph-mon[74426]: 2.19 scrub starts
Dec 11 09:15:59 compute-0 ceph-mon[74426]: 2.19 scrub ok
Dec 11 09:15:59 compute-0 ceph-mon[74426]: from='mgr.14400 192.168.122.100:0/1063598679' entity='mgr.compute-0.wwpcae' 
Dec 11 09:15:59 compute-0 ceph-mon[74426]: from='mgr.14400 192.168.122.100:0/1063598679' entity='mgr.compute-0.wwpcae' 
Dec 11 09:15:59 compute-0 ceph-mon[74426]: from='mgr.14400 192.168.122.100:0/1063598679' entity='mgr.compute-0.wwpcae' 
Dec 11 09:15:59 compute-0 ceph-mon[74426]: from='mgr.14400 192.168.122.100:0/1063598679' entity='mgr.compute-0.wwpcae' 
Dec 11 09:15:59 compute-0 ceph-mon[74426]: from='mgr.14400 192.168.122.100:0/1063598679' entity='mgr.compute-0.wwpcae' 
Dec 11 09:15:59 compute-0 ceph-mon[74426]: from='mgr.14400 192.168.122.100:0/1063598679' entity='mgr.compute-0.wwpcae' 
Dec 11 09:15:59 compute-0 ceph-mon[74426]: from='mgr.14400 192.168.122.100:0/1063598679' entity='mgr.compute-0.wwpcae' 
Dec 11 09:15:59 compute-0 sudo[90933]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uzahpizczspprotamycbcpouzcflpepg ; /usr/bin/python3'
Dec 11 09:15:59 compute-0 sudo[90887]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --timeout 895 gather-facts
Dec 11 09:15:59 compute-0 sudo[90933]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 11 09:15:59 compute-0 sudo[90887]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 11 09:15:59 compute-0 ceph-mon[74426]: log_channel(cluster) log [DBG] : mgrmap e27: compute-0.wwpcae(active, since 2s), standbys: compute-1.unesvp, compute-2.uiimcn
Dec 11 09:15:59 compute-0 python3[90936]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /tmp/ceph_mds.yml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   nfs cluster create cephfs --ingress --virtual-ip=192.168.122.2/24 --ingress-mode=haproxy-protocol '--placement=compute-0 compute-1 compute-2 ' _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 11 09:15:59 compute-0 podman[90938]: 2025-12-11 09:15:59.481149879 +0000 UTC m=+0.045378712 container create 1e14c7cbc75d81a361e8814e2f40a1c6afa1bcc2a5c71b361124edf2995711ae (image=quay.io/ceph/ceph:v19, name=gracious_taussig, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, ceph=True, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec 11 09:15:59 compute-0 ceph-osd[82859]: log_channel(cluster) log [DBG] : 2.4 scrub starts
Dec 11 09:15:59 compute-0 ceph-osd[82859]: log_channel(cluster) log [DBG] : 2.4 scrub ok
Dec 11 09:15:59 compute-0 systemd[1]: Started libpod-conmon-1e14c7cbc75d81a361e8814e2f40a1c6afa1bcc2a5c71b361124edf2995711ae.scope.
Dec 11 09:15:59 compute-0 systemd[1]: Started libcrun container.
Dec 11 09:15:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/002836bb8f0a990f771ef841ef957ed8a24a542abcc61d3f940a0d6d7e24e65b/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec 11 09:15:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/002836bb8f0a990f771ef841ef957ed8a24a542abcc61d3f940a0d6d7e24e65b/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 11 09:15:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/002836bb8f0a990f771ef841ef957ed8a24a542abcc61d3f940a0d6d7e24e65b/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Dec 11 09:15:59 compute-0 podman[90938]: 2025-12-11 09:15:59.460390199 +0000 UTC m=+0.024619052 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Dec 11 09:15:59 compute-0 podman[90938]: 2025-12-11 09:15:59.569061511 +0000 UTC m=+0.133290364 container init 1e14c7cbc75d81a361e8814e2f40a1c6afa1bcc2a5c71b361124edf2995711ae (image=quay.io/ceph/ceph:v19, name=gracious_taussig, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, CEPH_REF=squid, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Dec 11 09:15:59 compute-0 podman[90938]: 2025-12-11 09:15:59.577701152 +0000 UTC m=+0.141929985 container start 1e14c7cbc75d81a361e8814e2f40a1c6afa1bcc2a5c71b361124edf2995711ae (image=quay.io/ceph/ceph:v19, name=gracious_taussig, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.license=GPLv2)
Dec 11 09:15:59 compute-0 podman[90938]: 2025-12-11 09:15:59.582421989 +0000 UTC m=+0.146650852 container attach 1e14c7cbc75d81a361e8814e2f40a1c6afa1bcc2a5c71b361124edf2995711ae (image=quay.io/ceph/ceph:v19, name=gracious_taussig, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 11 09:15:59 compute-0 sudo[90887]: pam_unix(sudo:session): session closed for user root
Dec 11 09:15:59 compute-0 sudo[91007]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 11 09:15:59 compute-0 sudo[91007]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 11 09:15:59 compute-0 sudo[91007]: pam_unix(sudo:session): session closed for user root
Dec 11 09:15:59 compute-0 ceph-mgr[74715]: log_channel(audit) log [DBG] : from='client.14436 -' entity='client.admin' cmd=[{"prefix": "nfs cluster create", "cluster_id": "cephfs", "ingress": true, "virtual_ip": "192.168.122.2/24", "ingress_mode": "haproxy-protocol", "placement": "compute-0 compute-1 compute-2 ", "target": ["mon-mgr", ""]}]: dispatch
Dec 11 09:15:59 compute-0 ceph-mon[74426]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool create", "pool": ".nfs", "yes_i_really_mean_it": true} v 0)
Dec 11 09:15:59 compute-0 ceph-mon[74426]: log_channel(audit) log [INF] : from='mgr.14400 192.168.122.100:0/1063598679' entity='mgr.compute-0.wwpcae' cmd=[{"prefix": "osd pool create", "pool": ".nfs", "yes_i_really_mean_it": true}]: dispatch
Dec 11 09:15:59 compute-0 sudo[91032]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 list-networks
Dec 11 09:15:59 compute-0 sudo[91032]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 11 09:16:00 compute-0 ceph-mon[74426]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Dec 11 09:16:00 compute-0 ceph-mon[74426]: log_channel(audit) log [INF] : from='mgr.14400 192.168.122.100:0/1063598679' entity='mgr.compute-0.wwpcae' 
Dec 11 09:16:00 compute-0 ceph-mon[74426]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Dec 11 09:16:00 compute-0 ceph-mon[74426]: log_channel(audit) log [INF] : from='mgr.14400 192.168.122.100:0/1063598679' entity='mgr.compute-0.wwpcae' 
Dec 11 09:16:00 compute-0 ceph-mon[74426]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config rm", "who": "osd.0", "name": "osd_memory_target"} v 0)
Dec 11 09:16:00 compute-0 ceph-mon[74426]: log_channel(audit) log [INF] : from='mgr.14400 192.168.122.100:0/1063598679' entity='mgr.compute-0.wwpcae' cmd=[{"prefix": "config rm", "who": "osd.0", "name": "osd_memory_target"}]: dispatch
Dec 11 09:16:00 compute-0 ceph-mgr[74715]: [cephadm INFO root] Adjusting osd_memory_target on compute-1 to 127.9M
Dec 11 09:16:00 compute-0 ceph-mgr[74715]: log_channel(cephadm) log [INF] : Adjusting osd_memory_target on compute-1 to 127.9M
Dec 11 09:16:00 compute-0 ceph-mon[74426]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config set, name=osd_memory_target}] v 0)
Dec 11 09:16:00 compute-0 ceph-mgr[74715]: [cephadm WARNING cephadm.serve] Unable to set osd_memory_target on compute-1 to 134209126: error parsing value: Value '134209126' is below minimum 939524096
Dec 11 09:16:00 compute-0 ceph-mgr[74715]: log_channel(cephadm) log [WRN] : Unable to set osd_memory_target on compute-1 to 134209126: error parsing value: Value '134209126' is below minimum 939524096
Dec 11 09:16:00 compute-0 ceph-mon[74426]: mon.compute-0@0(leader).osd e44 do_prune osdmap full prune enabled
Dec 11 09:16:00 compute-0 ceph-mon[74426]: log_channel(audit) log [INF] : from='mgr.14400 192.168.122.100:0/1063598679' entity='mgr.compute-0.wwpcae' cmd='[{"prefix": "osd pool create", "pool": ".nfs", "yes_i_really_mean_it": true}]': finished
Dec 11 09:16:00 compute-0 ceph-mon[74426]: mon.compute-0@0(leader).osd e45 e45: 3 total, 3 up, 3 in
Dec 11 09:16:00 compute-0 ceph-mon[74426]: log_channel(cluster) log [DBG] : osdmap e45: 3 total, 3 up, 3 in
Dec 11 09:16:00 compute-0 ceph-mon[74426]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool application enable", "pool": ".nfs", "app": "nfs"} v 0)
Dec 11 09:16:00 compute-0 ceph-mon[74426]: log_channel(audit) log [INF] : from='mgr.14400 192.168.122.100:0/1063598679' entity='mgr.compute-0.wwpcae' cmd=[{"prefix": "osd pool application enable", "pool": ".nfs", "app": "nfs"}]: dispatch
Dec 11 09:16:00 compute-0 ceph-mon[74426]: 5.1c scrub starts
Dec 11 09:16:00 compute-0 ceph-mon[74426]: 5.1c scrub ok
Dec 11 09:16:00 compute-0 ceph-mon[74426]: [11/Dec/2025:09:15:58] ENGINE Bus STARTING
Dec 11 09:16:00 compute-0 ceph-mon[74426]: [11/Dec/2025:09:15:58] ENGINE Serving on https://192.168.122.100:7150
Dec 11 09:16:00 compute-0 ceph-mon[74426]: [11/Dec/2025:09:15:58] ENGINE Client ('192.168.122.100', 45662) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)')
Dec 11 09:16:00 compute-0 ceph-mon[74426]: pgmap v5: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Dec 11 09:16:00 compute-0 ceph-mon[74426]: from='client.14424 -' entity='client.admin' cmd=[{"prefix": "orch apply", "target": ["mon-mgr", ""]}]: dispatch
Dec 11 09:16:00 compute-0 ceph-mon[74426]: Saving service mds.cephfs spec with placement compute-0;compute-1;compute-2
Dec 11 09:16:00 compute-0 ceph-mon[74426]: [11/Dec/2025:09:15:58] ENGINE Serving on http://192.168.122.100:8765
Dec 11 09:16:00 compute-0 ceph-mon[74426]: [11/Dec/2025:09:15:58] ENGINE Bus STARTED
Dec 11 09:16:00 compute-0 ceph-mon[74426]: 2.a scrub starts
Dec 11 09:16:00 compute-0 ceph-mon[74426]: 2.a scrub ok
Dec 11 09:16:00 compute-0 ceph-mon[74426]: mgrmap e27: compute-0.wwpcae(active, since 2s), standbys: compute-1.unesvp, compute-2.uiimcn
Dec 11 09:16:00 compute-0 ceph-mon[74426]: 2.4 scrub starts
Dec 11 09:16:00 compute-0 ceph-mon[74426]: 2.4 scrub ok
Dec 11 09:16:00 compute-0 ceph-mon[74426]: from='mgr.14400 192.168.122.100:0/1063598679' entity='mgr.compute-0.wwpcae' cmd=[{"prefix": "osd pool create", "pool": ".nfs", "yes_i_really_mean_it": true}]: dispatch
Dec 11 09:16:00 compute-0 ceph-mon[74426]: from='mgr.14400 192.168.122.100:0/1063598679' entity='mgr.compute-0.wwpcae' 
Dec 11 09:16:00 compute-0 ceph-mon[74426]: from='mgr.14400 192.168.122.100:0/1063598679' entity='mgr.compute-0.wwpcae' 
Dec 11 09:16:00 compute-0 ceph-mon[74426]: from='mgr.14400 192.168.122.100:0/1063598679' entity='mgr.compute-0.wwpcae' cmd=[{"prefix": "config rm", "who": "osd.0", "name": "osd_memory_target"}]: dispatch
Dec 11 09:16:00 compute-0 sudo[91032]: pam_unix(sudo:session): session closed for user root
Dec 11 09:16:00 compute-0 ceph-mon[74426]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec 11 09:16:00 compute-0 ceph-mon[74426]: log_channel(audit) log [INF] : from='mgr.14400 192.168.122.100:0/1063598679' entity='mgr.compute-0.wwpcae' 
Dec 11 09:16:00 compute-0 ceph-mon[74426]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0)
Dec 11 09:16:00 compute-0 ceph-mon[74426]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec 11 09:16:00 compute-0 ceph-mon[74426]: log_channel(audit) log [INF] : from='mgr.14400 192.168.122.100:0/1063598679' entity='mgr.compute-0.wwpcae' 
Dec 11 09:16:00 compute-0 ceph-mon[74426]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Dec 11 09:16:00 compute-0 ceph-mon[74426]: log_channel(audit) log [INF] : from='mgr.14400 192.168.122.100:0/1063598679' entity='mgr.compute-0.wwpcae' 
Dec 11 09:16:00 compute-0 ceph-mon[74426]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config rm", "who": "osd.1", "name": "osd_memory_target"} v 0)
Dec 11 09:16:00 compute-0 ceph-mon[74426]: log_channel(audit) log [INF] : from='mgr.14400 192.168.122.100:0/1063598679' entity='mgr.compute-0.wwpcae' cmd=[{"prefix": "config rm", "who": "osd.1", "name": "osd_memory_target"}]: dispatch
Dec 11 09:16:00 compute-0 ceph-mon[74426]: log_channel(audit) log [INF] : from='mgr.14400 192.168.122.100:0/1063598679' entity='mgr.compute-0.wwpcae' 
Dec 11 09:16:00 compute-0 ceph-mgr[74715]: [cephadm INFO root] Adjusting osd_memory_target on compute-0 to 127.9M
Dec 11 09:16:00 compute-0 ceph-mgr[74715]: log_channel(cephadm) log [INF] : Adjusting osd_memory_target on compute-0 to 127.9M
Dec 11 09:16:00 compute-0 ceph-mon[74426]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config rm", "who": "osd.2", "name": "osd_memory_target"} v 0)
Dec 11 09:16:00 compute-0 ceph-mon[74426]: log_channel(audit) log [INF] : from='mgr.14400 192.168.122.100:0/1063598679' entity='mgr.compute-0.wwpcae' cmd=[{"prefix": "config rm", "who": "osd.2", "name": "osd_memory_target"}]: dispatch
Dec 11 09:16:00 compute-0 ceph-mon[74426]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config set, name=osd_memory_target}] v 0)
Dec 11 09:16:00 compute-0 ceph-mgr[74715]: [cephadm INFO root] Adjusting osd_memory_target on compute-2 to 127.9M
Dec 11 09:16:00 compute-0 ceph-mgr[74715]: log_channel(cephadm) log [INF] : Adjusting osd_memory_target on compute-2 to 127.9M
Dec 11 09:16:00 compute-0 ceph-mon[74426]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config set, name=osd_memory_target}] v 0)
Dec 11 09:16:00 compute-0 ceph-mgr[74715]: [cephadm WARNING cephadm.serve] Unable to set osd_memory_target on compute-0 to 134203392: error parsing value: Value '134203392' is below minimum 939524096
Dec 11 09:16:00 compute-0 ceph-mgr[74715]: log_channel(cephadm) log [WRN] : Unable to set osd_memory_target on compute-0 to 134203392: error parsing value: Value '134203392' is below minimum 939524096
Dec 11 09:16:00 compute-0 ceph-mgr[74715]: [cephadm WARNING cephadm.serve] Unable to set osd_memory_target on compute-2 to 134209126: error parsing value: Value '134209126' is below minimum 939524096
Dec 11 09:16:00 compute-0 ceph-mgr[74715]: log_channel(cephadm) log [WRN] : Unable to set osd_memory_target on compute-2 to 134209126: error parsing value: Value '134209126' is below minimum 939524096
Dec 11 09:16:00 compute-0 ceph-mon[74426]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 11 09:16:00 compute-0 ceph-mon[74426]: log_channel(audit) log [DBG] : from='mgr.14400 192.168.122.100:0/1063598679' entity='mgr.compute-0.wwpcae' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 11 09:16:00 compute-0 ceph-mon[74426]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Dec 11 09:16:00 compute-0 ceph-mon[74426]: log_channel(audit) log [INF] : from='mgr.14400 192.168.122.100:0/1063598679' entity='mgr.compute-0.wwpcae' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec 11 09:16:00 compute-0 ceph-mgr[74715]: [cephadm INFO cephadm.serve] Updating compute-0:/etc/ceph/ceph.conf
Dec 11 09:16:00 compute-0 ceph-mgr[74715]: log_channel(cephadm) log [INF] : Updating compute-0:/etc/ceph/ceph.conf
Dec 11 09:16:00 compute-0 ceph-mgr[74715]: [cephadm INFO cephadm.serve] Updating compute-1:/etc/ceph/ceph.conf
Dec 11 09:16:00 compute-0 ceph-mgr[74715]: [cephadm INFO cephadm.serve] Updating compute-2:/etc/ceph/ceph.conf
Dec 11 09:16:00 compute-0 ceph-mgr[74715]: log_channel(cephadm) log [INF] : Updating compute-1:/etc/ceph/ceph.conf
Dec 11 09:16:00 compute-0 ceph-mgr[74715]: log_channel(cephadm) log [INF] : Updating compute-2:/etc/ceph/ceph.conf
Dec 11 09:16:00 compute-0 sudo[91078]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /etc/ceph
Dec 11 09:16:00 compute-0 sudo[91078]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 11 09:16:00 compute-0 sudo[91078]: pam_unix(sudo:session): session closed for user root
Dec 11 09:16:00 compute-0 sudo[91103]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /tmp/cephadm-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060/etc/ceph
Dec 11 09:16:00 compute-0 sudo[91103]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 11 09:16:00 compute-0 sudo[91103]: pam_unix(sudo:session): session closed for user root
Dec 11 09:16:00 compute-0 ceph-osd[82859]: log_channel(cluster) log [DBG] : 2.e scrub starts
Dec 11 09:16:00 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 45 pg[8.0( empty local-lis/les=0/0 n=0 ec=45/45 lis/c=0/0 les/c/f=0/0/0 sis=45) [1] r=0 lpr=45 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 11 09:16:00 compute-0 ceph-osd[82859]: log_channel(cluster) log [DBG] : 2.e scrub ok
Dec 11 09:16:00 compute-0 sudo[91128]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/touch /tmp/cephadm-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060/etc/ceph/ceph.conf.new
Dec 11 09:16:00 compute-0 sudo[91128]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 11 09:16:00 compute-0 sudo[91128]: pam_unix(sudo:session): session closed for user root
Dec 11 09:16:00 compute-0 sudo[91153]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R ceph-admin /tmp/cephadm-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060
Dec 11 09:16:00 compute-0 sudo[91153]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 11 09:16:00 compute-0 sudo[91153]: pam_unix(sudo:session): session closed for user root
Dec 11 09:16:00 compute-0 sudo[91178]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060/etc/ceph/ceph.conf.new
Dec 11 09:16:00 compute-0 sudo[91178]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 11 09:16:00 compute-0 sudo[91178]: pam_unix(sudo:session): session closed for user root
Dec 11 09:16:00 compute-0 sudo[91226]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R 0:0 /tmp/cephadm-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060/etc/ceph/ceph.conf.new
Dec 11 09:16:00 compute-0 sudo[91226]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 11 09:16:00 compute-0 sudo[91226]: pam_unix(sudo:session): session closed for user root
Dec 11 09:16:00 compute-0 ceph-mgr[74715]: log_channel(cluster) log [DBG] : pgmap v7: 194 pgs: 1 unknown, 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Dec 11 09:16:00 compute-0 sudo[91251]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060/etc/ceph/ceph.conf.new
Dec 11 09:16:00 compute-0 sudo[91251]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 11 09:16:00 compute-0 sudo[91251]: pam_unix(sudo:session): session closed for user root
Dec 11 09:16:00 compute-0 sudo[91276]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mv /tmp/cephadm-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060/etc/ceph/ceph.conf.new /etc/ceph/ceph.conf
Dec 11 09:16:00 compute-0 sudo[91276]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 11 09:16:00 compute-0 sudo[91276]: pam_unix(sudo:session): session closed for user root
Dec 11 09:16:00 compute-0 ceph-mgr[74715]: [cephadm INFO cephadm.serve] Updating compute-0:/var/lib/ceph/31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060/config/ceph.conf
Dec 11 09:16:00 compute-0 ceph-mgr[74715]: log_channel(cephadm) log [INF] : Updating compute-0:/var/lib/ceph/31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060/config/ceph.conf
Dec 11 09:16:00 compute-0 ceph-mgr[74715]: [cephadm INFO cephadm.serve] Updating compute-1:/var/lib/ceph/31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060/config/ceph.conf
Dec 11 09:16:00 compute-0 ceph-mgr[74715]: log_channel(cephadm) log [INF] : Updating compute-1:/var/lib/ceph/31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060/config/ceph.conf
Dec 11 09:16:00 compute-0 ceph-mgr[74715]: [cephadm INFO cephadm.serve] Updating compute-2:/var/lib/ceph/31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060/config/ceph.conf
Dec 11 09:16:00 compute-0 ceph-mgr[74715]: log_channel(cephadm) log [INF] : Updating compute-2:/var/lib/ceph/31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060/config/ceph.conf
Dec 11 09:16:01 compute-0 sudo[91301]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /var/lib/ceph/31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060/config
Dec 11 09:16:01 compute-0 sudo[91301]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 11 09:16:01 compute-0 sudo[91301]: pam_unix(sudo:session): session closed for user root
Dec 11 09:16:01 compute-0 sudo[91326]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /tmp/cephadm-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060/var/lib/ceph/31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060/config
Dec 11 09:16:01 compute-0 sudo[91326]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 11 09:16:01 compute-0 sudo[91326]: pam_unix(sudo:session): session closed for user root
Dec 11 09:16:01 compute-0 sudo[91351]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/touch /tmp/cephadm-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060/var/lib/ceph/31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060/config/ceph.conf.new
Dec 11 09:16:01 compute-0 sudo[91351]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 11 09:16:01 compute-0 sudo[91351]: pam_unix(sudo:session): session closed for user root
Dec 11 09:16:01 compute-0 sudo[91376]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R ceph-admin /tmp/cephadm-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060
Dec 11 09:16:01 compute-0 sudo[91376]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 11 09:16:01 compute-0 sudo[91376]: pam_unix(sudo:session): session closed for user root
Dec 11 09:16:01 compute-0 ceph-mon[74426]: mon.compute-0@0(leader).osd e45 do_prune osdmap full prune enabled
Dec 11 09:16:01 compute-0 ceph-mon[74426]: log_channel(audit) log [INF] : from='mgr.14400 192.168.122.100:0/1063598679' entity='mgr.compute-0.wwpcae' cmd='[{"prefix": "osd pool application enable", "pool": ".nfs", "app": "nfs"}]': finished
Dec 11 09:16:01 compute-0 ceph-mon[74426]: mon.compute-0@0(leader).osd e46 e46: 3 total, 3 up, 3 in
Dec 11 09:16:01 compute-0 ceph-mon[74426]: log_channel(cluster) log [DBG] : osdmap e46: 3 total, 3 up, 3 in
Dec 11 09:16:01 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 46 pg[8.0( empty local-lis/les=45/46 n=0 ec=45/45 lis/c=0/0 les/c/f=0/0/0 sis=45) [1] r=0 lpr=45 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 11 09:16:01 compute-0 ceph-mon[74426]: 5.18 scrub starts
Dec 11 09:16:01 compute-0 ceph-mon[74426]: 5.18 scrub ok
Dec 11 09:16:01 compute-0 ceph-mon[74426]: from='client.14436 -' entity='client.admin' cmd=[{"prefix": "nfs cluster create", "cluster_id": "cephfs", "ingress": true, "virtual_ip": "192.168.122.2/24", "ingress_mode": "haproxy-protocol", "placement": "compute-0 compute-1 compute-2 ", "target": ["mon-mgr", ""]}]: dispatch
Dec 11 09:16:01 compute-0 ceph-mon[74426]: Adjusting osd_memory_target on compute-1 to 127.9M
Dec 11 09:16:01 compute-0 ceph-mon[74426]: Unable to set osd_memory_target on compute-1 to 134209126: error parsing value: Value '134209126' is below minimum 939524096
Dec 11 09:16:01 compute-0 ceph-mon[74426]: 3.0 scrub starts
Dec 11 09:16:01 compute-0 ceph-mon[74426]: 3.0 scrub ok
Dec 11 09:16:01 compute-0 ceph-mon[74426]: from='mgr.14400 192.168.122.100:0/1063598679' entity='mgr.compute-0.wwpcae' cmd='[{"prefix": "osd pool create", "pool": ".nfs", "yes_i_really_mean_it": true}]': finished
Dec 11 09:16:01 compute-0 ceph-mon[74426]: osdmap e45: 3 total, 3 up, 3 in
Dec 11 09:16:01 compute-0 ceph-mon[74426]: from='mgr.14400 192.168.122.100:0/1063598679' entity='mgr.compute-0.wwpcae' cmd=[{"prefix": "osd pool application enable", "pool": ".nfs", "app": "nfs"}]: dispatch
Dec 11 09:16:01 compute-0 ceph-mon[74426]: from='mgr.14400 192.168.122.100:0/1063598679' entity='mgr.compute-0.wwpcae' 
Dec 11 09:16:01 compute-0 ceph-mon[74426]: from='mgr.14400 192.168.122.100:0/1063598679' entity='mgr.compute-0.wwpcae' 
Dec 11 09:16:01 compute-0 ceph-mon[74426]: from='mgr.14400 192.168.122.100:0/1063598679' entity='mgr.compute-0.wwpcae' 
Dec 11 09:16:01 compute-0 ceph-mon[74426]: from='mgr.14400 192.168.122.100:0/1063598679' entity='mgr.compute-0.wwpcae' cmd=[{"prefix": "config rm", "who": "osd.1", "name": "osd_memory_target"}]: dispatch
Dec 11 09:16:01 compute-0 ceph-mon[74426]: from='mgr.14400 192.168.122.100:0/1063598679' entity='mgr.compute-0.wwpcae' 
Dec 11 09:16:01 compute-0 ceph-mon[74426]: Adjusting osd_memory_target on compute-0 to 127.9M
Dec 11 09:16:01 compute-0 ceph-mon[74426]: from='mgr.14400 192.168.122.100:0/1063598679' entity='mgr.compute-0.wwpcae' cmd=[{"prefix": "config rm", "who": "osd.2", "name": "osd_memory_target"}]: dispatch
Dec 11 09:16:01 compute-0 ceph-mon[74426]: Adjusting osd_memory_target on compute-2 to 127.9M
Dec 11 09:16:01 compute-0 ceph-mon[74426]: Unable to set osd_memory_target on compute-0 to 134203392: error parsing value: Value '134203392' is below minimum 939524096
Dec 11 09:16:01 compute-0 ceph-mon[74426]: Unable to set osd_memory_target on compute-2 to 134209126: error parsing value: Value '134209126' is below minimum 939524096
Dec 11 09:16:01 compute-0 ceph-mon[74426]: from='mgr.14400 192.168.122.100:0/1063598679' entity='mgr.compute-0.wwpcae' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 11 09:16:01 compute-0 ceph-mon[74426]: from='mgr.14400 192.168.122.100:0/1063598679' entity='mgr.compute-0.wwpcae' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec 11 09:16:01 compute-0 ceph-mon[74426]: Updating compute-0:/etc/ceph/ceph.conf
Dec 11 09:16:01 compute-0 ceph-mon[74426]: Updating compute-1:/etc/ceph/ceph.conf
Dec 11 09:16:01 compute-0 ceph-mon[74426]: Updating compute-2:/etc/ceph/ceph.conf
Dec 11 09:16:01 compute-0 ceph-mon[74426]: 2.e scrub starts
Dec 11 09:16:01 compute-0 ceph-mon[74426]: 2.e scrub ok
Dec 11 09:16:01 compute-0 ceph-mon[74426]: 4.1a scrub starts
Dec 11 09:16:01 compute-0 ceph-mon[74426]: 4.1a scrub ok
Dec 11 09:16:01 compute-0 ceph-mon[74426]: from='mgr.14400 192.168.122.100:0/1063598679' entity='mgr.compute-0.wwpcae' cmd='[{"prefix": "osd pool application enable", "pool": ".nfs", "app": "nfs"}]': finished
Dec 11 09:16:01 compute-0 ceph-mon[74426]: osdmap e46: 3 total, 3 up, 3 in
Dec 11 09:16:01 compute-0 sudo[91401]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060/var/lib/ceph/31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060/config/ceph.conf.new
Dec 11 09:16:01 compute-0 sudo[91401]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 11 09:16:01 compute-0 sudo[91401]: pam_unix(sudo:session): session closed for user root
Dec 11 09:16:01 compute-0 ceph-mgr[74715]: [nfs INFO nfs.cluster] Created empty object:conf-nfs.cephfs
Dec 11 09:16:01 compute-0 ceph-mgr[74715]: [cephadm INFO root] Saving service nfs.cephfs spec with placement compute-0;compute-1;compute-2
Dec 11 09:16:01 compute-0 ceph-mgr[74715]: log_channel(cephadm) log [INF] : Saving service nfs.cephfs spec with placement compute-0;compute-1;compute-2
Dec 11 09:16:01 compute-0 ceph-mon[74426]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Dec 11 09:16:01 compute-0 ceph-mon[74426]: log_channel(audit) log [INF] : from='mgr.14400 192.168.122.100:0/1063598679' entity='mgr.compute-0.wwpcae' 
Dec 11 09:16:01 compute-0 ceph-mgr[74715]: [cephadm INFO root] Saving service ingress.nfs.cephfs spec with placement compute-0;compute-1;compute-2
Dec 11 09:16:01 compute-0 ceph-mgr[74715]: log_channel(cephadm) log [INF] : Saving service ingress.nfs.cephfs spec with placement compute-0;compute-1;compute-2
Dec 11 09:16:01 compute-0 ceph-mon[74426]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.ingress.nfs.cephfs}] v 0)
Dec 11 09:16:01 compute-0 ceph-mon[74426]: log_channel(audit) log [INF] : from='mgr.14400 192.168.122.100:0/1063598679' entity='mgr.compute-0.wwpcae' 
Dec 11 09:16:01 compute-0 ceph-mon[74426]: log_channel(cluster) log [WRN] : Health check failed: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Dec 11 09:16:01 compute-0 systemd[1]: libpod-1e14c7cbc75d81a361e8814e2f40a1c6afa1bcc2a5c71b361124edf2995711ae.scope: Deactivated successfully.
Dec 11 09:16:01 compute-0 podman[90938]: 2025-12-11 09:16:01.368400118 +0000 UTC m=+1.932628991 container died 1e14c7cbc75d81a361e8814e2f40a1c6afa1bcc2a5c71b361124edf2995711ae (image=quay.io/ceph/ceph:v19, name=gracious_taussig, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 11 09:16:01 compute-0 ceph-mon[74426]: log_channel(cluster) log [DBG] : mgrmap e28: compute-0.wwpcae(active, since 4s), standbys: compute-1.unesvp, compute-2.uiimcn
Dec 11 09:16:01 compute-0 systemd[1]: var-lib-containers-storage-overlay-002836bb8f0a990f771ef841ef957ed8a24a542abcc61d3f940a0d6d7e24e65b-merged.mount: Deactivated successfully.
Dec 11 09:16:01 compute-0 podman[90938]: 2025-12-11 09:16:01.443832009 +0000 UTC m=+2.008060842 container remove 1e14c7cbc75d81a361e8814e2f40a1c6afa1bcc2a5c71b361124edf2995711ae (image=quay.io/ceph/ceph:v19, name=gracious_taussig, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Dec 11 09:16:01 compute-0 systemd[1]: libpod-conmon-1e14c7cbc75d81a361e8814e2f40a1c6afa1bcc2a5c71b361124edf2995711ae.scope: Deactivated successfully.
Dec 11 09:16:01 compute-0 sudo[90933]: pam_unix(sudo:session): session closed for user root
Dec 11 09:16:01 compute-0 sudo[91468]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R 0:0 /tmp/cephadm-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060/var/lib/ceph/31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060/config/ceph.conf.new
Dec 11 09:16:01 compute-0 sudo[91468]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 11 09:16:01 compute-0 sudo[91468]: pam_unix(sudo:session): session closed for user root
Dec 11 09:16:01 compute-0 ceph-osd[82859]: log_channel(cluster) log [DBG] : 7.18 scrub starts
Dec 11 09:16:01 compute-0 sudo[91498]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060/var/lib/ceph/31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060/config/ceph.conf.new
Dec 11 09:16:01 compute-0 sudo[91498]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 11 09:16:01 compute-0 sudo[91498]: pam_unix(sudo:session): session closed for user root
Dec 11 09:16:01 compute-0 ceph-osd[82859]: log_channel(cluster) log [DBG] : 7.18 scrub ok
Dec 11 09:16:01 compute-0 ceph-mgr[74715]: [cephadm INFO cephadm.serve] Updating compute-1:/etc/ceph/ceph.client.admin.keyring
Dec 11 09:16:01 compute-0 ceph-mgr[74715]: log_channel(cephadm) log [INF] : Updating compute-1:/etc/ceph/ceph.client.admin.keyring
Dec 11 09:16:01 compute-0 sudo[91523]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mv /tmp/cephadm-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060/var/lib/ceph/31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060/config/ceph.conf.new /var/lib/ceph/31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060/config/ceph.conf
Dec 11 09:16:01 compute-0 sudo[91523]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 11 09:16:01 compute-0 sudo[91523]: pam_unix(sudo:session): session closed for user root
Dec 11 09:16:01 compute-0 ceph-mgr[74715]: [cephadm INFO cephadm.serve] Updating compute-0:/etc/ceph/ceph.client.admin.keyring
Dec 11 09:16:01 compute-0 ceph-mgr[74715]: log_channel(cephadm) log [INF] : Updating compute-0:/etc/ceph/ceph.client.admin.keyring
Dec 11 09:16:01 compute-0 ceph-mgr[74715]: [cephadm INFO cephadm.serve] Updating compute-2:/etc/ceph/ceph.client.admin.keyring
Dec 11 09:16:01 compute-0 ceph-mgr[74715]: log_channel(cephadm) log [INF] : Updating compute-2:/etc/ceph/ceph.client.admin.keyring
Dec 11 09:16:01 compute-0 sudo[91548]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /etc/ceph
Dec 11 09:16:01 compute-0 sudo[91548]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 11 09:16:01 compute-0 sudo[91548]: pam_unix(sudo:session): session closed for user root
Dec 11 09:16:01 compute-0 sudo[91573]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /tmp/cephadm-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060/etc/ceph
Dec 11 09:16:01 compute-0 sudo[91573]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 11 09:16:01 compute-0 sudo[91573]: pam_unix(sudo:session): session closed for user root
Dec 11 09:16:01 compute-0 sudo[91598]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/touch /tmp/cephadm-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060/etc/ceph/ceph.client.admin.keyring.new
Dec 11 09:16:01 compute-0 sudo[91598]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 11 09:16:01 compute-0 sudo[91598]: pam_unix(sudo:session): session closed for user root
Dec 11 09:16:01 compute-0 sudo[91623]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R ceph-admin /tmp/cephadm-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060
Dec 11 09:16:01 compute-0 sudo[91623]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 11 09:16:01 compute-0 sudo[91623]: pam_unix(sudo:session): session closed for user root
Dec 11 09:16:01 compute-0 sudo[91648]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060/etc/ceph/ceph.client.admin.keyring.new
Dec 11 09:16:01 compute-0 sudo[91648]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 11 09:16:01 compute-0 sudo[91648]: pam_unix(sudo:session): session closed for user root
Dec 11 09:16:01 compute-0 sudo[91709]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ytichxwqjoikvhupamxthnlxxhpyfdmp ; /usr/bin/python3'
Dec 11 09:16:01 compute-0 sudo[91709]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 11 09:16:02 compute-0 sudo[91722]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R 0:0 /tmp/cephadm-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060/etc/ceph/ceph.client.admin.keyring.new
Dec 11 09:16:02 compute-0 sudo[91722]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 11 09:16:02 compute-0 sudo[91722]: pam_unix(sudo:session): session closed for user root
Dec 11 09:16:02 compute-0 sudo[91747]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 600 /tmp/cephadm-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060/etc/ceph/ceph.client.admin.keyring.new
Dec 11 09:16:02 compute-0 sudo[91747]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 11 09:16:02 compute-0 sudo[91747]: pam_unix(sudo:session): session closed for user root
Dec 11 09:16:02 compute-0 ceph-mgr[74715]: [cephadm INFO cephadm.serve] Updating compute-1:/var/lib/ceph/31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060/config/ceph.client.admin.keyring
Dec 11 09:16:02 compute-0 ceph-mgr[74715]: log_channel(cephadm) log [INF] : Updating compute-1:/var/lib/ceph/31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060/config/ceph.client.admin.keyring
Dec 11 09:16:02 compute-0 python3[91721]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint radosgw-admin quay.io/ceph/ceph:v19 --fsid 31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch ps --daemon-type rgw --format json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 11 09:16:02 compute-0 sudo[91772]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mv /tmp/cephadm-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060/etc/ceph/ceph.client.admin.keyring.new /etc/ceph/ceph.client.admin.keyring
Dec 11 09:16:02 compute-0 sudo[91772]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 11 09:16:02 compute-0 sudo[91772]: pam_unix(sudo:session): session closed for user root
Dec 11 09:16:02 compute-0 ceph-mgr[74715]: [cephadm INFO cephadm.serve] Updating compute-0:/var/lib/ceph/31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060/config/ceph.client.admin.keyring
Dec 11 09:16:02 compute-0 ceph-mgr[74715]: log_channel(cephadm) log [INF] : Updating compute-0:/var/lib/ceph/31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060/config/ceph.client.admin.keyring
Dec 11 09:16:02 compute-0 podman[91777]: 2025-12-11 09:16:02.177640681 +0000 UTC m=+0.042919205 container create e41170e1d5b17491a6424e3031d349f545d2a5f4b287e7044b207679a2bf15e0 (image=quay.io/ceph/ceph:v19, name=practical_taussig, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 11 09:16:02 compute-0 systemd[1]: Started libpod-conmon-e41170e1d5b17491a6424e3031d349f545d2a5f4b287e7044b207679a2bf15e0.scope.
Dec 11 09:16:02 compute-0 systemd[1]: Started libcrun container.
Dec 11 09:16:02 compute-0 sudo[91810]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /var/lib/ceph/31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060/config
Dec 11 09:16:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f603da13ca89caed851c23b8163c41547a565ceeaf3b10a3330216859db62cbd/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 11 09:16:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f603da13ca89caed851c23b8163c41547a565ceeaf3b10a3330216859db62cbd/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec 11 09:16:02 compute-0 sudo[91810]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 11 09:16:02 compute-0 sudo[91810]: pam_unix(sudo:session): session closed for user root
Dec 11 09:16:02 compute-0 ceph-mgr[74715]: [cephadm INFO cephadm.serve] Updating compute-2:/var/lib/ceph/31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060/config/ceph.client.admin.keyring
Dec 11 09:16:02 compute-0 ceph-mgr[74715]: log_channel(cephadm) log [INF] : Updating compute-2:/var/lib/ceph/31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060/config/ceph.client.admin.keyring
Dec 11 09:16:02 compute-0 podman[91777]: 2025-12-11 09:16:02.25142523 +0000 UTC m=+0.116703774 container init e41170e1d5b17491a6424e3031d349f545d2a5f4b287e7044b207679a2bf15e0 (image=quay.io/ceph/ceph:v19, name=practical_taussig, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 11 09:16:02 compute-0 podman[91777]: 2025-12-11 09:16:02.160473483 +0000 UTC m=+0.025752027 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Dec 11 09:16:02 compute-0 podman[91777]: 2025-12-11 09:16:02.258024987 +0000 UTC m=+0.123303511 container start e41170e1d5b17491a6424e3031d349f545d2a5f4b287e7044b207679a2bf15e0 (image=quay.io/ceph/ceph:v19, name=practical_taussig, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=squid, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 11 09:16:02 compute-0 podman[91777]: 2025-12-11 09:16:02.261791465 +0000 UTC m=+0.127070009 container attach e41170e1d5b17491a6424e3031d349f545d2a5f4b287e7044b207679a2bf15e0 (image=quay.io/ceph/ceph:v19, name=practical_taussig, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=squid, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0)
Dec 11 09:16:02 compute-0 ceph-mon[74426]: mon.compute-0@0(leader).osd e46 do_prune osdmap full prune enabled
Dec 11 09:16:02 compute-0 ceph-mon[74426]: mon.compute-0@0(leader).osd e47 e47: 3 total, 3 up, 3 in
Dec 11 09:16:02 compute-0 ceph-mon[74426]: log_channel(cluster) log [DBG] : osdmap e47: 3 total, 3 up, 3 in
Dec 11 09:16:02 compute-0 sudo[91841]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /tmp/cephadm-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060/var/lib/ceph/31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060/config
Dec 11 09:16:02 compute-0 sudo[91841]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 11 09:16:02 compute-0 sudo[91841]: pam_unix(sudo:session): session closed for user root
Dec 11 09:16:02 compute-0 practical_taussig[91832]: ERROR: invalid flag --daemon-type
Dec 11 09:16:02 compute-0 systemd[1]: libpod-e41170e1d5b17491a6424e3031d349f545d2a5f4b287e7044b207679a2bf15e0.scope: Deactivated successfully.
Dec 11 09:16:02 compute-0 podman[91777]: 2025-12-11 09:16:02.318365616 +0000 UTC m=+0.183644150 container died e41170e1d5b17491a6424e3031d349f545d2a5f4b287e7044b207679a2bf15e0 (image=quay.io/ceph/ceph:v19, name=practical_taussig, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Dec 11 09:16:02 compute-0 systemd[1]: var-lib-containers-storage-overlay-f603da13ca89caed851c23b8163c41547a565ceeaf3b10a3330216859db62cbd-merged.mount: Deactivated successfully.
Dec 11 09:16:02 compute-0 ceph-mon[74426]: pgmap v7: 194 pgs: 1 unknown, 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Dec 11 09:16:02 compute-0 ceph-mon[74426]: Updating compute-0:/var/lib/ceph/31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060/config/ceph.conf
Dec 11 09:16:02 compute-0 ceph-mon[74426]: Updating compute-1:/var/lib/ceph/31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060/config/ceph.conf
Dec 11 09:16:02 compute-0 ceph-mon[74426]: Updating compute-2:/var/lib/ceph/31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060/config/ceph.conf
Dec 11 09:16:02 compute-0 ceph-mon[74426]: 5.b deep-scrub starts
Dec 11 09:16:02 compute-0 ceph-mon[74426]: 5.b deep-scrub ok
Dec 11 09:16:02 compute-0 ceph-mon[74426]: Saving service nfs.cephfs spec with placement compute-0;compute-1;compute-2
Dec 11 09:16:02 compute-0 ceph-mon[74426]: from='mgr.14400 192.168.122.100:0/1063598679' entity='mgr.compute-0.wwpcae' 
Dec 11 09:16:02 compute-0 ceph-mon[74426]: Saving service ingress.nfs.cephfs spec with placement compute-0;compute-1;compute-2
Dec 11 09:16:02 compute-0 ceph-mon[74426]: from='mgr.14400 192.168.122.100:0/1063598679' entity='mgr.compute-0.wwpcae' 
Dec 11 09:16:02 compute-0 ceph-mon[74426]: Health check failed: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Dec 11 09:16:02 compute-0 ceph-mon[74426]: mgrmap e28: compute-0.wwpcae(active, since 4s), standbys: compute-1.unesvp, compute-2.uiimcn
Dec 11 09:16:02 compute-0 ceph-mon[74426]: 4.18 scrub starts
Dec 11 09:16:02 compute-0 ceph-mon[74426]: 7.18 scrub starts
Dec 11 09:16:02 compute-0 ceph-mon[74426]: 4.18 scrub ok
Dec 11 09:16:02 compute-0 ceph-mon[74426]: 7.18 scrub ok
Dec 11 09:16:02 compute-0 ceph-mon[74426]: Updating compute-1:/etc/ceph/ceph.client.admin.keyring
Dec 11 09:16:02 compute-0 ceph-mon[74426]: Updating compute-0:/etc/ceph/ceph.client.admin.keyring
Dec 11 09:16:02 compute-0 ceph-mon[74426]: Updating compute-2:/etc/ceph/ceph.client.admin.keyring
Dec 11 09:16:02 compute-0 ceph-mon[74426]: osdmap e47: 3 total, 3 up, 3 in
Dec 11 09:16:02 compute-0 podman[91777]: 2025-12-11 09:16:02.360843475 +0000 UTC m=+0.226121999 container remove e41170e1d5b17491a6424e3031d349f545d2a5f4b287e7044b207679a2bf15e0 (image=quay.io/ceph/ceph:v19, name=practical_taussig, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250325)
Dec 11 09:16:02 compute-0 systemd[1]: libpod-conmon-e41170e1d5b17491a6424e3031d349f545d2a5f4b287e7044b207679a2bf15e0.scope: Deactivated successfully.
Dec 11 09:16:02 compute-0 sudo[91882]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/touch /tmp/cephadm-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060/var/lib/ceph/31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060/config/ceph.client.admin.keyring.new
Dec 11 09:16:02 compute-0 sudo[91882]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 11 09:16:02 compute-0 sudo[91882]: pam_unix(sudo:session): session closed for user root
Dec 11 09:16:02 compute-0 sudo[91709]: pam_unix(sudo:session): session closed for user root
Dec 11 09:16:02 compute-0 sudo[91919]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R ceph-admin /tmp/cephadm-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060
Dec 11 09:16:02 compute-0 sudo[91919]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 11 09:16:02 compute-0 sudo[91919]: pam_unix(sudo:session): session closed for user root
Dec 11 09:16:02 compute-0 sudo[91944]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060/var/lib/ceph/31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060/config/ceph.client.admin.keyring.new
Dec 11 09:16:02 compute-0 sudo[91944]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 11 09:16:02 compute-0 sudo[91944]: pam_unix(sudo:session): session closed for user root
Dec 11 09:16:02 compute-0 ceph-osd[82859]: log_channel(cluster) log [DBG] : 7.1b scrub starts
Dec 11 09:16:02 compute-0 ceph-osd[82859]: log_channel(cluster) log [DBG] : 7.1b scrub ok
Dec 11 09:16:02 compute-0 sudo[91992]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R 0:0 /tmp/cephadm-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060/var/lib/ceph/31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060/config/ceph.client.admin.keyring.new
Dec 11 09:16:02 compute-0 sudo[91992]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 11 09:16:02 compute-0 sudo[91992]: pam_unix(sudo:session): session closed for user root
Dec 11 09:16:02 compute-0 ceph-mon[74426]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Dec 11 09:16:02 compute-0 ceph-mon[74426]: log_channel(audit) log [INF] : from='mgr.14400 192.168.122.100:0/1063598679' entity='mgr.compute-0.wwpcae' 
Dec 11 09:16:02 compute-0 ceph-mon[74426]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Dec 11 09:16:02 compute-0 ceph-mon[74426]: log_channel(audit) log [INF] : from='mgr.14400 192.168.122.100:0/1063598679' entity='mgr.compute-0.wwpcae' 
Dec 11 09:16:02 compute-0 sudo[92017]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 600 /tmp/cephadm-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060/var/lib/ceph/31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060/config/ceph.client.admin.keyring.new
Dec 11 09:16:02 compute-0 sudo[92017]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 11 09:16:02 compute-0 sudo[92017]: pam_unix(sudo:session): session closed for user root
Dec 11 09:16:02 compute-0 sudo[92042]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mv /tmp/cephadm-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060/var/lib/ceph/31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060/config/ceph.client.admin.keyring.new /var/lib/ceph/31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060/config/ceph.client.admin.keyring
Dec 11 09:16:02 compute-0 sudo[92042]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 11 09:16:02 compute-0 sudo[92042]: pam_unix(sudo:session): session closed for user root
Dec 11 09:16:02 compute-0 ceph-mon[74426]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec 11 09:16:02 compute-0 ceph-mon[74426]: log_channel(audit) log [INF] : from='mgr.14400 192.168.122.100:0/1063598679' entity='mgr.compute-0.wwpcae' 
Dec 11 09:16:02 compute-0 ceph-mon[74426]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec 11 09:16:02 compute-0 ceph-mon[74426]: log_channel(audit) log [INF] : from='mgr.14400 192.168.122.100:0/1063598679' entity='mgr.compute-0.wwpcae' 
Dec 11 09:16:02 compute-0 ceph-mgr[74715]: log_channel(cluster) log [DBG] : pgmap v10: 194 pgs: 1 unknown, 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Dec 11 09:16:02 compute-0 ceph-mon[74426]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0)
Dec 11 09:16:02 compute-0 ceph-mon[74426]: log_channel(audit) log [INF] : from='mgr.14400 192.168.122.100:0/1063598679' entity='mgr.compute-0.wwpcae' 
Dec 11 09:16:02 compute-0 ceph-mon[74426]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Dec 11 09:16:03 compute-0 ceph-mon[74426]: log_channel(audit) log [INF] : from='mgr.14400 192.168.122.100:0/1063598679' entity='mgr.compute-0.wwpcae' 
Dec 11 09:16:03 compute-0 ceph-mon[74426]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Dec 11 09:16:03 compute-0 ceph-mon[74426]: log_channel(audit) log [INF] : from='mgr.14400 192.168.122.100:0/1063598679' entity='mgr.compute-0.wwpcae' 
Dec 11 09:16:03 compute-0 ceph-mgr[74715]: [progress INFO root] update: starting ev 3d9e253b-6bb3-4338-8f8d-f6ac5e647f80 (Updating node-exporter deployment (+3 -> 3))
Dec 11 09:16:03 compute-0 ceph-mgr[74715]: [cephadm INFO cephadm.serve] Deploying daemon node-exporter.compute-0 on compute-0
Dec 11 09:16:03 compute-0 ceph-mgr[74715]: log_channel(cephadm) log [INF] : Deploying daemon node-exporter.compute-0 on compute-0
Dec 11 09:16:03 compute-0 sudo[92067]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 11 09:16:03 compute-0 sudo[92067]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 11 09:16:03 compute-0 sudo[92067]: pam_unix(sudo:session): session closed for user root
Dec 11 09:16:03 compute-0 sudo[92092]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/prometheus/node-exporter:v1.7.0 --timeout 895 _orch deploy --fsid 31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060
Dec 11 09:16:03 compute-0 sudo[92092]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 11 09:16:03 compute-0 ceph-mon[74426]: Updating compute-1:/var/lib/ceph/31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060/config/ceph.client.admin.keyring
Dec 11 09:16:03 compute-0 ceph-mon[74426]: Updating compute-0:/var/lib/ceph/31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060/config/ceph.client.admin.keyring
Dec 11 09:16:03 compute-0 ceph-mon[74426]: 2.c scrub starts
Dec 11 09:16:03 compute-0 ceph-mon[74426]: 2.c scrub ok
Dec 11 09:16:03 compute-0 ceph-mon[74426]: Updating compute-2:/var/lib/ceph/31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060/config/ceph.client.admin.keyring
Dec 11 09:16:03 compute-0 ceph-mon[74426]: 7.1b scrub starts
Dec 11 09:16:03 compute-0 ceph-mon[74426]: 7.1b scrub ok
Dec 11 09:16:03 compute-0 ceph-mon[74426]: 3.1c scrub starts
Dec 11 09:16:03 compute-0 ceph-mon[74426]: 3.1c scrub ok
Dec 11 09:16:03 compute-0 ceph-mon[74426]: from='mgr.14400 192.168.122.100:0/1063598679' entity='mgr.compute-0.wwpcae' 
Dec 11 09:16:03 compute-0 ceph-mon[74426]: from='mgr.14400 192.168.122.100:0/1063598679' entity='mgr.compute-0.wwpcae' 
Dec 11 09:16:03 compute-0 ceph-mon[74426]: from='mgr.14400 192.168.122.100:0/1063598679' entity='mgr.compute-0.wwpcae' 
Dec 11 09:16:03 compute-0 ceph-mon[74426]: from='mgr.14400 192.168.122.100:0/1063598679' entity='mgr.compute-0.wwpcae' 
Dec 11 09:16:03 compute-0 ceph-mon[74426]: from='mgr.14400 192.168.122.100:0/1063598679' entity='mgr.compute-0.wwpcae' 
Dec 11 09:16:03 compute-0 ceph-mon[74426]: from='mgr.14400 192.168.122.100:0/1063598679' entity='mgr.compute-0.wwpcae' 
Dec 11 09:16:03 compute-0 ceph-mon[74426]: from='mgr.14400 192.168.122.100:0/1063598679' entity='mgr.compute-0.wwpcae' 
Dec 11 09:16:03 compute-0 ceph-mon[74426]: 2.10 scrub starts
Dec 11 09:16:03 compute-0 ceph-mon[74426]: 2.10 scrub ok
Dec 11 09:16:03 compute-0 ceph-osd[82859]: log_channel(cluster) log [DBG] : 7.1e deep-scrub starts
Dec 11 09:16:03 compute-0 ceph-osd[82859]: log_channel(cluster) log [DBG] : 7.1e deep-scrub ok
Dec 11 09:16:03 compute-0 systemd[1]: Reloading.
Dec 11 09:16:03 compute-0 systemd-rc-local-generator[92184]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 11 09:16:03 compute-0 systemd-sysv-generator[92188]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 11 09:16:03 compute-0 ceph-mon[74426]: log_channel(cluster) log [INF] : Health check cleared: POOL_APP_NOT_ENABLED (was: 1 pool(s) do not have an application enabled)
Dec 11 09:16:03 compute-0 ceph-mon[74426]: log_channel(cluster) log [DBG] : mgrmap e29: compute-0.wwpcae(active, since 6s), standbys: compute-1.unesvp, compute-2.uiimcn
Dec 11 09:16:03 compute-0 systemd[1]: Reloading.
Dec 11 09:16:03 compute-0 systemd-rc-local-generator[92225]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 11 09:16:03 compute-0 systemd-sysv-generator[92228]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 11 09:16:04 compute-0 ceph-mon[74426]: mon.compute-0@0(leader).osd e47 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 11 09:16:04 compute-0 systemd[1]: Starting Ceph node-exporter.compute-0 for 31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060...
Dec 11 09:16:04 compute-0 ceph-mon[74426]: pgmap v10: 194 pgs: 1 unknown, 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Dec 11 09:16:04 compute-0 ceph-mon[74426]: Deploying daemon node-exporter.compute-0 on compute-0
Dec 11 09:16:04 compute-0 ceph-mon[74426]: 7.1e deep-scrub starts
Dec 11 09:16:04 compute-0 ceph-mon[74426]: 7.1e deep-scrub ok
Dec 11 09:16:04 compute-0 ceph-mon[74426]: 4.1b scrub starts
Dec 11 09:16:04 compute-0 ceph-mon[74426]: 4.1b scrub ok
Dec 11 09:16:04 compute-0 ceph-mon[74426]: Health check cleared: POOL_APP_NOT_ENABLED (was: 1 pool(s) do not have an application enabled)
Dec 11 09:16:04 compute-0 ceph-mon[74426]: mgrmap e29: compute-0.wwpcae(active, since 6s), standbys: compute-1.unesvp, compute-2.uiimcn
Dec 11 09:16:04 compute-0 ceph-mon[74426]: 2.13 scrub starts
Dec 11 09:16:04 compute-0 ceph-mon[74426]: 2.13 scrub ok
Dec 11 09:16:04 compute-0 bash[92279]: Trying to pull quay.io/prometheus/node-exporter:v1.7.0...
Dec 11 09:16:04 compute-0 ceph-osd[82859]: log_channel(cluster) log [DBG] : 7.6 deep-scrub starts
Dec 11 09:16:04 compute-0 ceph-osd[82859]: log_channel(cluster) log [DBG] : 7.6 deep-scrub ok
Dec 11 09:16:04 compute-0 ceph-mgr[74715]: log_channel(cluster) log [DBG] : pgmap v11: 194 pgs: 194 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail; 29 KiB/s rd, 0 B/s wr, 12 op/s
Dec 11 09:16:05 compute-0 bash[92279]: Getting image source signatures
Dec 11 09:16:05 compute-0 bash[92279]: Copying blob sha256:324153f2810a9927fcce320af9e4e291e0b6e805cbdd1f338386c756b9defa24
Dec 11 09:16:05 compute-0 bash[92279]: Copying blob sha256:2abcce694348cd2c949c0e98a7400ebdfd8341021bcf6b541bc72033ce982510
Dec 11 09:16:05 compute-0 bash[92279]: Copying blob sha256:455fd88e5221bc1e278ef2d059cd70e4df99a24e5af050ede621534276f6cf9a
Dec 11 09:16:05 compute-0 ceph-mon[74426]: 7.6 deep-scrub starts
Dec 11 09:16:05 compute-0 ceph-mon[74426]: 7.6 deep-scrub ok
Dec 11 09:16:05 compute-0 ceph-mon[74426]: 7.1c scrub starts
Dec 11 09:16:05 compute-0 ceph-mon[74426]: 7.1c scrub ok
Dec 11 09:16:05 compute-0 ceph-mon[74426]: 5.13 scrub starts
Dec 11 09:16:05 compute-0 ceph-mon[74426]: 5.13 scrub ok
Dec 11 09:16:05 compute-0 ceph-osd[82859]: log_channel(cluster) log [DBG] : 7.3 scrub starts
Dec 11 09:16:05 compute-0 ceph-osd[82859]: log_channel(cluster) log [DBG] : 7.3 scrub ok
Dec 11 09:16:06 compute-0 bash[92279]: Copying config sha256:72c9c208898624938c9e4183d6686ea4a5fd3f912bc29bc3f00147924c521a3e
Dec 11 09:16:06 compute-0 bash[92279]: Writing manifest to image destination
Dec 11 09:16:06 compute-0 podman[92279]: 2025-12-11 09:16:06.270379271 +0000 UTC m=+1.916663691 container create 71c011c91ff6298c4d680d36893ef7d2b662adac0c087e7fdac0eacd23df3a9f (image=quay.io/prometheus/node-exporter:v1.7.0, name=ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-node-exporter-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec 11 09:16:06 compute-0 podman[92279]: 2025-12-11 09:16:06.2358661 +0000 UTC m=+1.882150540 image pull 72c9c208898624938c9e4183d6686ea4a5fd3f912bc29bc3f00147924c521a3e quay.io/prometheus/node-exporter:v1.7.0
Dec 11 09:16:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c1c72d21617a272a88769d01aa48440e65a2c5c00f89ec0516264a0ae35b58e7/merged/etc/node-exporter supports timestamps until 2038 (0x7fffffff)
Dec 11 09:16:06 compute-0 podman[92279]: 2025-12-11 09:16:06.350741966 +0000 UTC m=+1.997026416 container init 71c011c91ff6298c4d680d36893ef7d2b662adac0c087e7fdac0eacd23df3a9f (image=quay.io/prometheus/node-exporter:v1.7.0, name=ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-node-exporter-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec 11 09:16:06 compute-0 podman[92279]: 2025-12-11 09:16:06.356528738 +0000 UTC m=+2.002813158 container start 71c011c91ff6298c4d680d36893ef7d2b662adac0c087e7fdac0eacd23df3a9f (image=quay.io/prometheus/node-exporter:v1.7.0, name=ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-node-exporter-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec 11 09:16:06 compute-0 bash[92279]: 71c011c91ff6298c4d680d36893ef7d2b662adac0c087e7fdac0eacd23df3a9f
Dec 11 09:16:06 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-node-exporter-compute-0[92352]: ts=2025-12-11T09:16:06.364Z caller=node_exporter.go:192 level=info msg="Starting node_exporter" version="(version=1.7.0, branch=HEAD, revision=7333465abf9efba81876303bb57e6fadb946041b)"
Dec 11 09:16:06 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-node-exporter-compute-0[92352]: ts=2025-12-11T09:16:06.364Z caller=node_exporter.go:193 level=info msg="Build context" build_context="(go=go1.21.4, platform=linux/amd64, user=root@35918982f6d8, date=20231112-23:53:35, tags=netgo osusergo static_build)"
Dec 11 09:16:06 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-node-exporter-compute-0[92352]: ts=2025-12-11T09:16:06.365Z caller=filesystem_common.go:111 level=info collector=filesystem msg="Parsed flag --collector.filesystem.mount-points-exclude" flag=^/(dev|proc|run/credentials/.+|sys|var/lib/docker/.+|var/lib/containers/storage/.+)($|/)
Dec 11 09:16:06 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-node-exporter-compute-0[92352]: ts=2025-12-11T09:16:06.365Z caller=filesystem_common.go:113 level=info collector=filesystem msg="Parsed flag --collector.filesystem.fs-types-exclude" flag=^(autofs|binfmt_misc|bpf|cgroup2?|configfs|debugfs|devpts|devtmpfs|fusectl|hugetlbfs|iso9660|mqueue|nsfs|overlay|proc|procfs|pstore|rpc_pipefs|securityfs|selinuxfs|squashfs|sysfs|tracefs)$
Dec 11 09:16:06 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-node-exporter-compute-0[92352]: ts=2025-12-11T09:16:06.366Z caller=diskstats_common.go:111 level=info collector=diskstats msg="Parsed flag --collector.diskstats.device-exclude" flag=^(ram|loop|fd|(h|s|v|xv)d[a-z]|nvme\d+n\d+p)\d+$
Dec 11 09:16:06 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-node-exporter-compute-0[92352]: ts=2025-12-11T09:16:06.366Z caller=diskstats_linux.go:265 level=error collector=diskstats msg="Failed to open directory, disabling udev device properties" path=/run/udev/data
Dec 11 09:16:06 compute-0 systemd[1]: Started Ceph node-exporter.compute-0 for 31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060.
Dec 11 09:16:06 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-node-exporter-compute-0[92352]: ts=2025-12-11T09:16:06.367Z caller=node_exporter.go:110 level=info msg="Enabled collectors"
Dec 11 09:16:06 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-node-exporter-compute-0[92352]: ts=2025-12-11T09:16:06.367Z caller=node_exporter.go:117 level=info collector=arp
Dec 11 09:16:06 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-node-exporter-compute-0[92352]: ts=2025-12-11T09:16:06.367Z caller=node_exporter.go:117 level=info collector=bcache
Dec 11 09:16:06 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-node-exporter-compute-0[92352]: ts=2025-12-11T09:16:06.367Z caller=node_exporter.go:117 level=info collector=bonding
Dec 11 09:16:06 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-node-exporter-compute-0[92352]: ts=2025-12-11T09:16:06.367Z caller=node_exporter.go:117 level=info collector=btrfs
Dec 11 09:16:06 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-node-exporter-compute-0[92352]: ts=2025-12-11T09:16:06.367Z caller=node_exporter.go:117 level=info collector=conntrack
Dec 11 09:16:06 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-node-exporter-compute-0[92352]: ts=2025-12-11T09:16:06.367Z caller=node_exporter.go:117 level=info collector=cpu
Dec 11 09:16:06 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-node-exporter-compute-0[92352]: ts=2025-12-11T09:16:06.367Z caller=node_exporter.go:117 level=info collector=cpufreq
Dec 11 09:16:06 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-node-exporter-compute-0[92352]: ts=2025-12-11T09:16:06.367Z caller=node_exporter.go:117 level=info collector=diskstats
Dec 11 09:16:06 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-node-exporter-compute-0[92352]: ts=2025-12-11T09:16:06.367Z caller=node_exporter.go:117 level=info collector=dmi
Dec 11 09:16:06 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-node-exporter-compute-0[92352]: ts=2025-12-11T09:16:06.367Z caller=node_exporter.go:117 level=info collector=edac
Dec 11 09:16:06 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-node-exporter-compute-0[92352]: ts=2025-12-11T09:16:06.367Z caller=node_exporter.go:117 level=info collector=entropy
Dec 11 09:16:06 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-node-exporter-compute-0[92352]: ts=2025-12-11T09:16:06.367Z caller=node_exporter.go:117 level=info collector=fibrechannel
Dec 11 09:16:06 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-node-exporter-compute-0[92352]: ts=2025-12-11T09:16:06.367Z caller=node_exporter.go:117 level=info collector=filefd
Dec 11 09:16:06 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-node-exporter-compute-0[92352]: ts=2025-12-11T09:16:06.367Z caller=node_exporter.go:117 level=info collector=filesystem
Dec 11 09:16:06 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-node-exporter-compute-0[92352]: ts=2025-12-11T09:16:06.367Z caller=node_exporter.go:117 level=info collector=hwmon
Dec 11 09:16:06 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-node-exporter-compute-0[92352]: ts=2025-12-11T09:16:06.367Z caller=node_exporter.go:117 level=info collector=infiniband
Dec 11 09:16:06 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-node-exporter-compute-0[92352]: ts=2025-12-11T09:16:06.367Z caller=node_exporter.go:117 level=info collector=ipvs
Dec 11 09:16:06 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-node-exporter-compute-0[92352]: ts=2025-12-11T09:16:06.367Z caller=node_exporter.go:117 level=info collector=loadavg
Dec 11 09:16:06 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-node-exporter-compute-0[92352]: ts=2025-12-11T09:16:06.367Z caller=node_exporter.go:117 level=info collector=mdadm
Dec 11 09:16:06 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-node-exporter-compute-0[92352]: ts=2025-12-11T09:16:06.367Z caller=node_exporter.go:117 level=info collector=meminfo
Dec 11 09:16:06 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-node-exporter-compute-0[92352]: ts=2025-12-11T09:16:06.367Z caller=node_exporter.go:117 level=info collector=netclass
Dec 11 09:16:06 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-node-exporter-compute-0[92352]: ts=2025-12-11T09:16:06.367Z caller=node_exporter.go:117 level=info collector=netdev
Dec 11 09:16:06 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-node-exporter-compute-0[92352]: ts=2025-12-11T09:16:06.367Z caller=node_exporter.go:117 level=info collector=netstat
Dec 11 09:16:06 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-node-exporter-compute-0[92352]: ts=2025-12-11T09:16:06.367Z caller=node_exporter.go:117 level=info collector=nfs
Dec 11 09:16:06 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-node-exporter-compute-0[92352]: ts=2025-12-11T09:16:06.367Z caller=node_exporter.go:117 level=info collector=nfsd
Dec 11 09:16:06 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-node-exporter-compute-0[92352]: ts=2025-12-11T09:16:06.367Z caller=node_exporter.go:117 level=info collector=nvme
Dec 11 09:16:06 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-node-exporter-compute-0[92352]: ts=2025-12-11T09:16:06.367Z caller=node_exporter.go:117 level=info collector=os
Dec 11 09:16:06 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-node-exporter-compute-0[92352]: ts=2025-12-11T09:16:06.367Z caller=node_exporter.go:117 level=info collector=powersupplyclass
Dec 11 09:16:06 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-node-exporter-compute-0[92352]: ts=2025-12-11T09:16:06.367Z caller=node_exporter.go:117 level=info collector=pressure
Dec 11 09:16:06 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-node-exporter-compute-0[92352]: ts=2025-12-11T09:16:06.367Z caller=node_exporter.go:117 level=info collector=rapl
Dec 11 09:16:06 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-node-exporter-compute-0[92352]: ts=2025-12-11T09:16:06.367Z caller=node_exporter.go:117 level=info collector=schedstat
Dec 11 09:16:06 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-node-exporter-compute-0[92352]: ts=2025-12-11T09:16:06.367Z caller=node_exporter.go:117 level=info collector=selinux
Dec 11 09:16:06 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-node-exporter-compute-0[92352]: ts=2025-12-11T09:16:06.367Z caller=node_exporter.go:117 level=info collector=sockstat
Dec 11 09:16:06 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-node-exporter-compute-0[92352]: ts=2025-12-11T09:16:06.367Z caller=node_exporter.go:117 level=info collector=softnet
Dec 11 09:16:06 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-node-exporter-compute-0[92352]: ts=2025-12-11T09:16:06.367Z caller=node_exporter.go:117 level=info collector=stat
Dec 11 09:16:06 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-node-exporter-compute-0[92352]: ts=2025-12-11T09:16:06.367Z caller=node_exporter.go:117 level=info collector=tapestats
Dec 11 09:16:06 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-node-exporter-compute-0[92352]: ts=2025-12-11T09:16:06.367Z caller=node_exporter.go:117 level=info collector=textfile
Dec 11 09:16:06 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-node-exporter-compute-0[92352]: ts=2025-12-11T09:16:06.367Z caller=node_exporter.go:117 level=info collector=thermal_zone
Dec 11 09:16:06 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-node-exporter-compute-0[92352]: ts=2025-12-11T09:16:06.367Z caller=node_exporter.go:117 level=info collector=time
Dec 11 09:16:06 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-node-exporter-compute-0[92352]: ts=2025-12-11T09:16:06.367Z caller=node_exporter.go:117 level=info collector=udp_queues
Dec 11 09:16:06 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-node-exporter-compute-0[92352]: ts=2025-12-11T09:16:06.367Z caller=node_exporter.go:117 level=info collector=uname
Dec 11 09:16:06 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-node-exporter-compute-0[92352]: ts=2025-12-11T09:16:06.367Z caller=node_exporter.go:117 level=info collector=vmstat
Dec 11 09:16:06 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-node-exporter-compute-0[92352]: ts=2025-12-11T09:16:06.367Z caller=node_exporter.go:117 level=info collector=xfs
Dec 11 09:16:06 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-node-exporter-compute-0[92352]: ts=2025-12-11T09:16:06.367Z caller=node_exporter.go:117 level=info collector=zfs
Dec 11 09:16:06 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-node-exporter-compute-0[92352]: ts=2025-12-11T09:16:06.369Z caller=tls_config.go:274 level=info msg="Listening on" address=[::]:9100
Dec 11 09:16:06 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-node-exporter-compute-0[92352]: ts=2025-12-11T09:16:06.369Z caller=tls_config.go:277 level=info msg="TLS is disabled." http2=false address=[::]:9100
Dec 11 09:16:06 compute-0 ceph-mon[74426]: pgmap v11: 194 pgs: 194 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail; 29 KiB/s rd, 0 B/s wr, 12 op/s
Dec 11 09:16:06 compute-0 ceph-mon[74426]: 7.3 scrub starts
Dec 11 09:16:06 compute-0 ceph-mon[74426]: 7.3 scrub ok
Dec 11 09:16:06 compute-0 ceph-mon[74426]: 7.17 scrub starts
Dec 11 09:16:06 compute-0 ceph-mon[74426]: 7.17 scrub ok
Dec 11 09:16:06 compute-0 ceph-mon[74426]: 5.8 scrub starts
Dec 11 09:16:06 compute-0 ceph-mon[74426]: 5.8 scrub ok
Dec 11 09:16:06 compute-0 sudo[92092]: pam_unix(sudo:session): session closed for user root
Dec 11 09:16:06 compute-0 ceph-mon[74426]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec 11 09:16:06 compute-0 ceph-mon[74426]: log_channel(audit) log [INF] : from='mgr.14400 192.168.122.100:0/1063598679' entity='mgr.compute-0.wwpcae' 
Dec 11 09:16:06 compute-0 ceph-mon[74426]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec 11 09:16:06 compute-0 ceph-osd[82859]: log_channel(cluster) log [DBG] : 7.2 scrub starts
Dec 11 09:16:06 compute-0 ceph-mon[74426]: log_channel(audit) log [INF] : from='mgr.14400 192.168.122.100:0/1063598679' entity='mgr.compute-0.wwpcae' 
Dec 11 09:16:06 compute-0 ceph-mon[74426]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.node-exporter}] v 0)
Dec 11 09:16:06 compute-0 ceph-osd[82859]: log_channel(cluster) log [DBG] : 7.2 scrub ok
Dec 11 09:16:06 compute-0 ceph-mon[74426]: log_channel(audit) log [INF] : from='mgr.14400 192.168.122.100:0/1063598679' entity='mgr.compute-0.wwpcae' 
Dec 11 09:16:06 compute-0 ceph-mgr[74715]: [cephadm INFO cephadm.serve] Deploying daemon node-exporter.compute-1 on compute-1
Dec 11 09:16:06 compute-0 ceph-mgr[74715]: log_channel(cephadm) log [INF] : Deploying daemon node-exporter.compute-1 on compute-1
Dec 11 09:16:06 compute-0 ceph-mgr[74715]: log_channel(cluster) log [DBG] : pgmap v12: 194 pgs: 194 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail; 27 KiB/s rd, 0 B/s wr, 11 op/s
Dec 11 09:16:07 compute-0 ceph-osd[82859]: log_channel(cluster) log [DBG] : 7.4 scrub starts
Dec 11 09:16:07 compute-0 ceph-osd[82859]: log_channel(cluster) log [DBG] : 7.4 scrub ok
Dec 11 09:16:07 compute-0 ceph-mon[74426]: from='mgr.14400 192.168.122.100:0/1063598679' entity='mgr.compute-0.wwpcae' 
Dec 11 09:16:07 compute-0 ceph-mon[74426]: 7.2 scrub starts
Dec 11 09:16:07 compute-0 ceph-mon[74426]: from='mgr.14400 192.168.122.100:0/1063598679' entity='mgr.compute-0.wwpcae' 
Dec 11 09:16:07 compute-0 ceph-mon[74426]: 7.2 scrub ok
Dec 11 09:16:07 compute-0 ceph-mon[74426]: from='mgr.14400 192.168.122.100:0/1063598679' entity='mgr.compute-0.wwpcae' 
Dec 11 09:16:07 compute-0 ceph-mon[74426]: Deploying daemon node-exporter.compute-1 on compute-1
Dec 11 09:16:07 compute-0 ceph-mon[74426]: 7.12 scrub starts
Dec 11 09:16:07 compute-0 ceph-mon[74426]: 7.12 scrub ok
Dec 11 09:16:08 compute-0 ceph-osd[82859]: log_channel(cluster) log [DBG] : 7.e scrub starts
Dec 11 09:16:08 compute-0 ceph-osd[82859]: log_channel(cluster) log [DBG] : 7.e scrub ok
Dec 11 09:16:08 compute-0 ceph-mon[74426]: pgmap v12: 194 pgs: 194 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail; 27 KiB/s rd, 0 B/s wr, 11 op/s
Dec 11 09:16:08 compute-0 ceph-mon[74426]: 4.14 scrub starts
Dec 11 09:16:08 compute-0 ceph-mon[74426]: 4.14 scrub ok
Dec 11 09:16:08 compute-0 ceph-mon[74426]: 7.4 scrub starts
Dec 11 09:16:08 compute-0 ceph-mon[74426]: 7.4 scrub ok
Dec 11 09:16:08 compute-0 ceph-mon[74426]: 7.15 scrub starts
Dec 11 09:16:08 compute-0 ceph-mon[74426]: 7.15 scrub ok
Dec 11 09:16:08 compute-0 ceph-mon[74426]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Dec 11 09:16:08 compute-0 ceph-mon[74426]: log_channel(audit) log [INF] : from='mgr.14400 192.168.122.100:0/1063598679' entity='mgr.compute-0.wwpcae' 
Dec 11 09:16:08 compute-0 ceph-mon[74426]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Dec 11 09:16:08 compute-0 ceph-mon[74426]: log_channel(audit) log [INF] : from='mgr.14400 192.168.122.100:0/1063598679' entity='mgr.compute-0.wwpcae' 
Dec 11 09:16:08 compute-0 ceph-mon[74426]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.node-exporter}] v 0)
Dec 11 09:16:08 compute-0 ceph-mon[74426]: log_channel(audit) log [INF] : from='mgr.14400 192.168.122.100:0/1063598679' entity='mgr.compute-0.wwpcae' 
Dec 11 09:16:08 compute-0 ceph-mgr[74715]: [cephadm INFO cephadm.serve] Deploying daemon node-exporter.compute-2 on compute-2
Dec 11 09:16:08 compute-0 ceph-mgr[74715]: log_channel(cephadm) log [INF] : Deploying daemon node-exporter.compute-2 on compute-2
Dec 11 09:16:08 compute-0 ceph-mgr[74715]: log_channel(cluster) log [DBG] : pgmap v13: 194 pgs: 194 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail; 22 KiB/s rd, 0 B/s wr, 9 op/s
Dec 11 09:16:09 compute-0 ceph-mon[74426]: mon.compute-0@0(leader).osd e47 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 11 09:16:09 compute-0 ceph-osd[82859]: log_channel(cluster) log [DBG] : 7.f scrub starts
Dec 11 09:16:09 compute-0 ceph-osd[82859]: log_channel(cluster) log [DBG] : 7.f scrub ok
Dec 11 09:16:09 compute-0 ceph-mon[74426]: 2.1b scrub starts
Dec 11 09:16:09 compute-0 ceph-mon[74426]: 2.1b scrub ok
Dec 11 09:16:09 compute-0 ceph-mon[74426]: 7.e scrub starts
Dec 11 09:16:09 compute-0 ceph-mon[74426]: 7.e scrub ok
Dec 11 09:16:09 compute-0 ceph-mon[74426]: 7.0 scrub starts
Dec 11 09:16:09 compute-0 ceph-mon[74426]: 7.0 scrub ok
Dec 11 09:16:09 compute-0 ceph-mon[74426]: from='mgr.14400 192.168.122.100:0/1063598679' entity='mgr.compute-0.wwpcae' 
Dec 11 09:16:09 compute-0 ceph-mon[74426]: from='mgr.14400 192.168.122.100:0/1063598679' entity='mgr.compute-0.wwpcae' 
Dec 11 09:16:09 compute-0 ceph-mon[74426]: from='mgr.14400 192.168.122.100:0/1063598679' entity='mgr.compute-0.wwpcae' 
Dec 11 09:16:09 compute-0 ceph-mon[74426]: Deploying daemon node-exporter.compute-2 on compute-2
Dec 11 09:16:09 compute-0 ceph-mon[74426]: pgmap v13: 194 pgs: 194 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail; 22 KiB/s rd, 0 B/s wr, 9 op/s
Dec 11 09:16:10 compute-0 ceph-osd[82859]: log_channel(cluster) log [DBG] : 7.8 scrub starts
Dec 11 09:16:10 compute-0 ceph-osd[82859]: log_channel(cluster) log [DBG] : 7.8 scrub ok
Dec 11 09:16:10 compute-0 ceph-mgr[74715]: log_channel(cluster) log [DBG] : pgmap v14: 194 pgs: 194 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail; 18 KiB/s rd, 0 B/s wr, 7 op/s
Dec 11 09:16:11 compute-0 ceph-mon[74426]: 5.d scrub starts
Dec 11 09:16:11 compute-0 ceph-mon[74426]: 7.f scrub starts
Dec 11 09:16:11 compute-0 ceph-mon[74426]: 5.d scrub ok
Dec 11 09:16:11 compute-0 ceph-mon[74426]: 7.f scrub ok
Dec 11 09:16:11 compute-0 ceph-mon[74426]: 7.1 deep-scrub starts
Dec 11 09:16:11 compute-0 ceph-mon[74426]: 7.1 deep-scrub ok
Dec 11 09:16:11 compute-0 ceph-osd[82859]: log_channel(cluster) log [DBG] : 7.9 scrub starts
Dec 11 09:16:11 compute-0 ceph-osd[82859]: log_channel(cluster) log [DBG] : 7.9 scrub ok
Dec 11 09:16:12 compute-0 ceph-mon[74426]: 7.a scrub starts
Dec 11 09:16:12 compute-0 ceph-mon[74426]: 7.a scrub ok
Dec 11 09:16:12 compute-0 ceph-mon[74426]: 7.8 scrub starts
Dec 11 09:16:12 compute-0 ceph-mon[74426]: 7.8 scrub ok
Dec 11 09:16:12 compute-0 ceph-mon[74426]: 7.7 scrub starts
Dec 11 09:16:12 compute-0 ceph-mon[74426]: 7.7 scrub ok
Dec 11 09:16:12 compute-0 ceph-mon[74426]: pgmap v14: 194 pgs: 194 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail; 18 KiB/s rd, 0 B/s wr, 7 op/s
Dec 11 09:16:12 compute-0 ceph-mon[74426]: 7.1d deep-scrub starts
Dec 11 09:16:12 compute-0 ceph-mon[74426]: 7.1d deep-scrub ok
Dec 11 09:16:12 compute-0 ceph-mon[74426]: 7.9 scrub starts
Dec 11 09:16:12 compute-0 ceph-mon[74426]: 7.9 scrub ok
Dec 11 09:16:12 compute-0 ceph-mon[74426]: 6.3 scrub starts
Dec 11 09:16:12 compute-0 ceph-mon[74426]: 6.3 scrub ok
Dec 11 09:16:12 compute-0 sudo[92384]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vtdngyjjbocmbsbicwtbilqrkiuvdksz ; /usr/bin/python3'
Dec 11 09:16:12 compute-0 sudo[92384]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 11 09:16:12 compute-0 ceph-osd[82859]: log_channel(cluster) log [DBG] : 7.b scrub starts
Dec 11 09:16:12 compute-0 ceph-osd[82859]: log_channel(cluster) log [DBG] : 7.b scrub ok
Dec 11 09:16:12 compute-0 python3[92386]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint radosgw-admin quay.io/ceph/ceph:v19 --fsid 31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch ps --daemon-type rgw --format json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 11 09:16:12 compute-0 podman[92387]: 2025-12-11 09:16:12.710093233 +0000 UTC m=+0.050667087 container create dba955321f3d8cf47a9ed48b3999f3c255f5a816b7a559f4655a860f9ce91bed (image=quay.io/ceph/ceph:v19, name=wizardly_hofstadter, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 11 09:16:12 compute-0 systemd[1]: Started libpod-conmon-dba955321f3d8cf47a9ed48b3999f3c255f5a816b7a559f4655a860f9ce91bed.scope.
Dec 11 09:16:12 compute-0 systemd[1]: Started libcrun container.
Dec 11 09:16:12 compute-0 podman[92387]: 2025-12-11 09:16:12.689262721 +0000 UTC m=+0.029836585 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Dec 11 09:16:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9e6541883c492ebb9fdda47358fe6bb10068f905f9c3930758f94f163d84ee15/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec 11 09:16:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9e6541883c492ebb9fdda47358fe6bb10068f905f9c3930758f94f163d84ee15/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 11 09:16:12 compute-0 podman[92387]: 2025-12-11 09:16:12.805236882 +0000 UTC m=+0.145810756 container init dba955321f3d8cf47a9ed48b3999f3c255f5a816b7a559f4655a860f9ce91bed (image=quay.io/ceph/ceph:v19, name=wizardly_hofstadter, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 11 09:16:12 compute-0 podman[92387]: 2025-12-11 09:16:12.814947155 +0000 UTC m=+0.155521009 container start dba955321f3d8cf47a9ed48b3999f3c255f5a816b7a559f4655a860f9ce91bed (image=quay.io/ceph/ceph:v19, name=wizardly_hofstadter, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 11 09:16:12 compute-0 podman[92387]: 2025-12-11 09:16:12.81922932 +0000 UTC m=+0.159803194 container attach dba955321f3d8cf47a9ed48b3999f3c255f5a816b7a559f4655a860f9ce91bed (image=quay.io/ceph/ceph:v19, name=wizardly_hofstadter, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1)
Dec 11 09:16:12 compute-0 wizardly_hofstadter[92403]: ERROR: invalid flag --daemon-type
Dec 11 09:16:12 compute-0 systemd[1]: libpod-dba955321f3d8cf47a9ed48b3999f3c255f5a816b7a559f4655a860f9ce91bed.scope: Deactivated successfully.
Dec 11 09:16:12 compute-0 podman[92387]: 2025-12-11 09:16:12.875113999 +0000 UTC m=+0.215687853 container died dba955321f3d8cf47a9ed48b3999f3c255f5a816b7a559f4655a860f9ce91bed (image=quay.io/ceph/ceph:v19, name=wizardly_hofstadter, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250325)
Dec 11 09:16:12 compute-0 ceph-mgr[74715]: log_channel(cluster) log [DBG] : pgmap v15: 194 pgs: 194 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail; 17 KiB/s rd, 0 B/s wr, 6 op/s
Dec 11 09:16:12 compute-0 systemd[1]: var-lib-containers-storage-overlay-9e6541883c492ebb9fdda47358fe6bb10068f905f9c3930758f94f163d84ee15-merged.mount: Deactivated successfully.
Dec 11 09:16:12 compute-0 podman[92387]: 2025-12-11 09:16:12.922013677 +0000 UTC m=+0.262587521 container remove dba955321f3d8cf47a9ed48b3999f3c255f5a816b7a559f4655a860f9ce91bed (image=quay.io/ceph/ceph:v19, name=wizardly_hofstadter, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325)
Dec 11 09:16:12 compute-0 systemd[1]: libpod-conmon-dba955321f3d8cf47a9ed48b3999f3c255f5a816b7a559f4655a860f9ce91bed.scope: Deactivated successfully.
Dec 11 09:16:12 compute-0 sudo[92384]: pam_unix(sudo:session): session closed for user root
Dec 11 09:16:13 compute-0 ceph-mon[74426]: 7.b scrub starts
Dec 11 09:16:13 compute-0 ceph-mon[74426]: 7.b scrub ok
Dec 11 09:16:13 compute-0 ceph-mon[74426]: 7.d scrub starts
Dec 11 09:16:13 compute-0 ceph-mon[74426]: 7.d scrub ok
Dec 11 09:16:13 compute-0 ceph-osd[82859]: log_channel(cluster) log [DBG] : 7.10 scrub starts
Dec 11 09:16:13 compute-0 ceph-osd[82859]: log_channel(cluster) log [DBG] : 7.10 scrub ok
Dec 11 09:16:14 compute-0 ceph-mon[74426]: mon.compute-0@0(leader).osd e47 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 11 09:16:14 compute-0 ceph-mon[74426]: pgmap v15: 194 pgs: 194 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail; 17 KiB/s rd, 0 B/s wr, 6 op/s
Dec 11 09:16:14 compute-0 ceph-mon[74426]: 7.10 scrub starts
Dec 11 09:16:14 compute-0 ceph-mon[74426]: 7.10 scrub ok
Dec 11 09:16:14 compute-0 ceph-mon[74426]: 7.c scrub starts
Dec 11 09:16:14 compute-0 ceph-mon[74426]: 7.c scrub ok
Dec 11 09:16:14 compute-0 ceph-osd[82859]: log_channel(cluster) log [DBG] : 7.13 scrub starts
Dec 11 09:16:14 compute-0 ceph-osd[82859]: log_channel(cluster) log [DBG] : 7.13 scrub ok
Dec 11 09:16:14 compute-0 ceph-mgr[74715]: log_channel(cluster) log [DBG] : pgmap v16: 194 pgs: 194 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail; 15 KiB/s rd, 0 B/s wr, 6 op/s
Dec 11 09:16:15 compute-0 ceph-mon[74426]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0)
Dec 11 09:16:15 compute-0 ceph-mon[74426]: log_channel(audit) log [INF] : from='mgr.14400 192.168.122.100:0/1063598679' entity='mgr.compute-0.wwpcae' 
Dec 11 09:16:15 compute-0 ceph-mon[74426]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Dec 11 09:16:15 compute-0 ceph-mon[74426]: log_channel(audit) log [INF] : from='mgr.14400 192.168.122.100:0/1063598679' entity='mgr.compute-0.wwpcae' 
Dec 11 09:16:15 compute-0 ceph-mon[74426]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.node-exporter}] v 0)
Dec 11 09:16:15 compute-0 ceph-mon[74426]: log_channel(audit) log [INF] : from='mgr.14400 192.168.122.100:0/1063598679' entity='mgr.compute-0.wwpcae' 
Dec 11 09:16:15 compute-0 ceph-mgr[74715]: [progress INFO root] complete: finished ev 3d9e253b-6bb3-4338-8f8d-f6ac5e647f80 (Updating node-exporter deployment (+3 -> 3))
Dec 11 09:16:15 compute-0 ceph-mgr[74715]: [progress INFO root] Completed event 3d9e253b-6bb3-4338-8f8d-f6ac5e647f80 (Updating node-exporter deployment (+3 -> 3)) in 12 seconds
Dec 11 09:16:15 compute-0 ceph-mon[74426]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.node-exporter}] v 0)
Dec 11 09:16:15 compute-0 ceph-mon[74426]: log_channel(audit) log [INF] : from='mgr.14400 192.168.122.100:0/1063598679' entity='mgr.compute-0.wwpcae' 
Dec 11 09:16:15 compute-0 ceph-mon[74426]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Dec 11 09:16:15 compute-0 ceph-mon[74426]: log_channel(audit) log [DBG] : from='mgr.14400 192.168.122.100:0/1063598679' entity='mgr.compute-0.wwpcae' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec 11 09:16:15 compute-0 ceph-mon[74426]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Dec 11 09:16:15 compute-0 ceph-mon[74426]: log_channel(audit) log [INF] : from='mgr.14400 192.168.122.100:0/1063598679' entity='mgr.compute-0.wwpcae' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec 11 09:16:15 compute-0 ceph-mon[74426]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 11 09:16:15 compute-0 ceph-mon[74426]: log_channel(audit) log [DBG] : from='mgr.14400 192.168.122.100:0/1063598679' entity='mgr.compute-0.wwpcae' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 11 09:16:15 compute-0 ceph-mon[74426]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Dec 11 09:16:15 compute-0 ceph-mon[74426]: log_channel(audit) log [INF] : from='mgr.14400 192.168.122.100:0/1063598679' entity='mgr.compute-0.wwpcae' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec 11 09:16:15 compute-0 ceph-mon[74426]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 11 09:16:15 compute-0 ceph-mon[74426]: log_channel(audit) log [DBG] : from='mgr.14400 192.168.122.100:0/1063598679' entity='mgr.compute-0.wwpcae' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 11 09:16:15 compute-0 ceph-mon[74426]: 7.13 scrub starts
Dec 11 09:16:15 compute-0 ceph-mon[74426]: 7.13 scrub ok
Dec 11 09:16:15 compute-0 ceph-mon[74426]: 7.19 scrub starts
Dec 11 09:16:15 compute-0 ceph-mon[74426]: 7.19 scrub ok
Dec 11 09:16:15 compute-0 ceph-mon[74426]: from='mgr.14400 192.168.122.100:0/1063598679' entity='mgr.compute-0.wwpcae' 
Dec 11 09:16:15 compute-0 ceph-mon[74426]: from='mgr.14400 192.168.122.100:0/1063598679' entity='mgr.compute-0.wwpcae' 
Dec 11 09:16:15 compute-0 ceph-mon[74426]: from='mgr.14400 192.168.122.100:0/1063598679' entity='mgr.compute-0.wwpcae' 
Dec 11 09:16:15 compute-0 ceph-mon[74426]: from='mgr.14400 192.168.122.100:0/1063598679' entity='mgr.compute-0.wwpcae' 
Dec 11 09:16:15 compute-0 ceph-mon[74426]: from='mgr.14400 192.168.122.100:0/1063598679' entity='mgr.compute-0.wwpcae' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec 11 09:16:15 compute-0 ceph-mon[74426]: from='mgr.14400 192.168.122.100:0/1063598679' entity='mgr.compute-0.wwpcae' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec 11 09:16:15 compute-0 ceph-mon[74426]: from='mgr.14400 192.168.122.100:0/1063598679' entity='mgr.compute-0.wwpcae' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 11 09:16:15 compute-0 ceph-mon[74426]: from='mgr.14400 192.168.122.100:0/1063598679' entity='mgr.compute-0.wwpcae' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec 11 09:16:15 compute-0 ceph-mon[74426]: from='mgr.14400 192.168.122.100:0/1063598679' entity='mgr.compute-0.wwpcae' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 11 09:16:15 compute-0 sudo[92433]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 11 09:16:15 compute-0 sudo[92433]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 11 09:16:15 compute-0 sudo[92433]: pam_unix(sudo:session): session closed for user root
Dec 11 09:16:15 compute-0 sudo[92458]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Dec 11 09:16:15 compute-0 sudo[92458]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 11 09:16:16 compute-0 podman[92523]: 2025-12-11 09:16:16.046109404 +0000 UTC m=+0.044634908 container create 990b633eb638e9252e7f428309cfed1c1546b22dd459a0b974b5e8c77f635951 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=boring_lamarr, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250325, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 11 09:16:16 compute-0 systemd[1]: Started libpod-conmon-990b633eb638e9252e7f428309cfed1c1546b22dd459a0b974b5e8c77f635951.scope.
Dec 11 09:16:16 compute-0 systemd[1]: Started libcrun container.
Dec 11 09:16:16 compute-0 podman[92523]: 2025-12-11 09:16:16.027045287 +0000 UTC m=+0.025570811 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec 11 09:16:16 compute-0 podman[92523]: 2025-12-11 09:16:16.124857269 +0000 UTC m=+0.123382793 container init 990b633eb638e9252e7f428309cfed1c1546b22dd459a0b974b5e8c77f635951 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=boring_lamarr, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True)
Dec 11 09:16:16 compute-0 podman[92523]: 2025-12-11 09:16:16.131411465 +0000 UTC m=+0.129936969 container start 990b633eb638e9252e7f428309cfed1c1546b22dd459a0b974b5e8c77f635951 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=boring_lamarr, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1)
Dec 11 09:16:16 compute-0 podman[92523]: 2025-12-11 09:16:16.135275025 +0000 UTC m=+0.133800559 container attach 990b633eb638e9252e7f428309cfed1c1546b22dd459a0b974b5e8c77f635951 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=boring_lamarr, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 11 09:16:16 compute-0 boring_lamarr[92539]: 167 167
Dec 11 09:16:16 compute-0 systemd[1]: libpod-990b633eb638e9252e7f428309cfed1c1546b22dd459a0b974b5e8c77f635951.scope: Deactivated successfully.
Dec 11 09:16:16 compute-0 podman[92523]: 2025-12-11 09:16:16.137392601 +0000 UTC m=+0.135918115 container died 990b633eb638e9252e7f428309cfed1c1546b22dd459a0b974b5e8c77f635951 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=boring_lamarr, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=squid, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Dec 11 09:16:16 compute-0 systemd[1]: var-lib-containers-storage-overlay-f1b3249572481f79956a661ba5a42c823ae52efc97cb75dff472b32a4d17ed44-merged.mount: Deactivated successfully.
Dec 11 09:16:16 compute-0 podman[92523]: 2025-12-11 09:16:16.178356864 +0000 UTC m=+0.176882368 container remove 990b633eb638e9252e7f428309cfed1c1546b22dd459a0b974b5e8c77f635951 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=boring_lamarr, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, io.buildah.version=1.40.1)
Dec 11 09:16:16 compute-0 systemd[1]: libpod-conmon-990b633eb638e9252e7f428309cfed1c1546b22dd459a0b974b5e8c77f635951.scope: Deactivated successfully.
Dec 11 09:16:16 compute-0 podman[92563]: 2025-12-11 09:16:16.353490466 +0000 UTC m=+0.051837563 container create b5782466b26f9846b2e8acb3c21691aeddf1e0a524502e4f10ae9fbf2072d286 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=wonderful_sammet, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250325, CEPH_REF=squid, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 11 09:16:16 compute-0 systemd[1]: Started libpod-conmon-b5782466b26f9846b2e8acb3c21691aeddf1e0a524502e4f10ae9fbf2072d286.scope.
Dec 11 09:16:16 compute-0 podman[92563]: 2025-12-11 09:16:16.327520284 +0000 UTC m=+0.025867421 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec 11 09:16:16 compute-0 systemd[1]: Started libcrun container.
Dec 11 09:16:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3501d0b548e56fdc530274384162b8eb3e9c68bf5b338794710638e4c4f8f1a4/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 11 09:16:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3501d0b548e56fdc530274384162b8eb3e9c68bf5b338794710638e4c4f8f1a4/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 11 09:16:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3501d0b548e56fdc530274384162b8eb3e9c68bf5b338794710638e4c4f8f1a4/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 11 09:16:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3501d0b548e56fdc530274384162b8eb3e9c68bf5b338794710638e4c4f8f1a4/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 11 09:16:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3501d0b548e56fdc530274384162b8eb3e9c68bf5b338794710638e4c4f8f1a4/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec 11 09:16:16 compute-0 podman[92563]: 2025-12-11 09:16:16.451133753 +0000 UTC m=+0.149480890 container init b5782466b26f9846b2e8acb3c21691aeddf1e0a524502e4f10ae9fbf2072d286 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=wonderful_sammet, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, CEPH_REF=squid, ceph=True, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 11 09:16:16 compute-0 podman[92563]: 2025-12-11 09:16:16.459775193 +0000 UTC m=+0.158122300 container start b5782466b26f9846b2e8acb3c21691aeddf1e0a524502e4f10ae9fbf2072d286 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=wonderful_sammet, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325)
Dec 11 09:16:16 compute-0 podman[92563]: 2025-12-11 09:16:16.463307564 +0000 UTC m=+0.161654681 container attach b5782466b26f9846b2e8acb3c21691aeddf1e0a524502e4f10ae9fbf2072d286 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=wonderful_sammet, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec 11 09:16:16 compute-0 ceph-mon[74426]: pgmap v16: 194 pgs: 194 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail; 15 KiB/s rd, 0 B/s wr, 6 op/s
Dec 11 09:16:16 compute-0 ceph-mon[74426]: 6.1a scrub starts
Dec 11 09:16:16 compute-0 ceph-mon[74426]: 6.1a scrub ok
Dec 11 09:16:16 compute-0 wonderful_sammet[92579]: --> passed data devices: 0 physical, 1 LVM
Dec 11 09:16:16 compute-0 wonderful_sammet[92579]: --> All data devices are unavailable
Dec 11 09:16:16 compute-0 systemd[1]: libpod-b5782466b26f9846b2e8acb3c21691aeddf1e0a524502e4f10ae9fbf2072d286.scope: Deactivated successfully.
Dec 11 09:16:16 compute-0 podman[92563]: 2025-12-11 09:16:16.843496896 +0000 UTC m=+0.541844003 container died b5782466b26f9846b2e8acb3c21691aeddf1e0a524502e4f10ae9fbf2072d286 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=wonderful_sammet, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 11 09:16:16 compute-0 ceph-mgr[74715]: log_channel(cluster) log [DBG] : pgmap v17: 194 pgs: 194 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 11 09:16:16 compute-0 systemd[1]: var-lib-containers-storage-overlay-3501d0b548e56fdc530274384162b8eb3e9c68bf5b338794710638e4c4f8f1a4-merged.mount: Deactivated successfully.
Dec 11 09:16:16 compute-0 ceph-mgr[74715]: [progress INFO root] Writing back 14 completed events
Dec 11 09:16:16 compute-0 ceph-mon[74426]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0)
Dec 11 09:16:17 compute-0 podman[92563]: 2025-12-11 09:16:17.011713661 +0000 UTC m=+0.710060768 container remove b5782466b26f9846b2e8acb3c21691aeddf1e0a524502e4f10ae9fbf2072d286 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=wonderful_sammet, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, OSD_FLAVOR=default)
Dec 11 09:16:17 compute-0 systemd[1]: libpod-conmon-b5782466b26f9846b2e8acb3c21691aeddf1e0a524502e4f10ae9fbf2072d286.scope: Deactivated successfully.
Dec 11 09:16:17 compute-0 ceph-mon[74426]: log_channel(audit) log [INF] : from='mgr.14400 192.168.122.100:0/1063598679' entity='mgr.compute-0.wwpcae' 
Dec 11 09:16:17 compute-0 sudo[92458]: pam_unix(sudo:session): session closed for user root
Dec 11 09:16:17 compute-0 sudo[92607]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 11 09:16:17 compute-0 sudo[92607]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 11 09:16:17 compute-0 sudo[92607]: pam_unix(sudo:session): session closed for user root
Dec 11 09:16:17 compute-0 sudo[92632]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060 -- lvm list --format json
Dec 11 09:16:17 compute-0 sudo[92632]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 11 09:16:17 compute-0 podman[92698]: 2025-12-11 09:16:17.648475605 +0000 UTC m=+0.045376802 container create 83cec47f44db11a3fa8738ca816a3472aff984428140f41409abaadec12841d9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=quizzical_lamport, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 11 09:16:17 compute-0 systemd[1]: Started libpod-conmon-83cec47f44db11a3fa8738ca816a3472aff984428140f41409abaadec12841d9.scope.
Dec 11 09:16:17 compute-0 systemd[1]: Started libcrun container.
Dec 11 09:16:17 compute-0 podman[92698]: 2025-12-11 09:16:17.630612506 +0000 UTC m=+0.027513723 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec 11 09:16:17 compute-0 podman[92698]: 2025-12-11 09:16:17.735762447 +0000 UTC m=+0.132663664 container init 83cec47f44db11a3fa8738ca816a3472aff984428140f41409abaadec12841d9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=quizzical_lamport, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS)
Dec 11 09:16:17 compute-0 podman[92698]: 2025-12-11 09:16:17.743440818 +0000 UTC m=+0.140342015 container start 83cec47f44db11a3fa8738ca816a3472aff984428140f41409abaadec12841d9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=quizzical_lamport, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, io.buildah.version=1.40.1, ceph=True, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 11 09:16:17 compute-0 quizzical_lamport[92714]: 167 167
Dec 11 09:16:17 compute-0 podman[92698]: 2025-12-11 09:16:17.747071231 +0000 UTC m=+0.143972438 container attach 83cec47f44db11a3fa8738ca816a3472aff984428140f41409abaadec12841d9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=quizzical_lamport, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325)
Dec 11 09:16:17 compute-0 systemd[1]: libpod-83cec47f44db11a3fa8738ca816a3472aff984428140f41409abaadec12841d9.scope: Deactivated successfully.
Dec 11 09:16:17 compute-0 podman[92698]: 2025-12-11 09:16:17.750146777 +0000 UTC m=+0.147047974 container died 83cec47f44db11a3fa8738ca816a3472aff984428140f41409abaadec12841d9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=quizzical_lamport, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 11 09:16:17 compute-0 systemd[1]: var-lib-containers-storage-overlay-e3586d77119029626a847122a1fcc803fc10ac1142fa61efe41c2e2c9a09dba4-merged.mount: Deactivated successfully.
Dec 11 09:16:17 compute-0 podman[92698]: 2025-12-11 09:16:17.790450939 +0000 UTC m=+0.187352136 container remove 83cec47f44db11a3fa8738ca816a3472aff984428140f41409abaadec12841d9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=quizzical_lamport, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.build-date=20250325)
Dec 11 09:16:17 compute-0 systemd[1]: libpod-conmon-83cec47f44db11a3fa8738ca816a3472aff984428140f41409abaadec12841d9.scope: Deactivated successfully.
Dec 11 09:16:17 compute-0 podman[92737]: 2025-12-11 09:16:17.964330603 +0000 UTC m=+0.047565740 container create ef8771cdaa000be12a26f2e86fe1aa04fe7e9fa78a23496a53c5b334d762436f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=wonderful_fermat, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 11 09:16:18 compute-0 systemd[1]: Started libpod-conmon-ef8771cdaa000be12a26f2e86fe1aa04fe7e9fa78a23496a53c5b334d762436f.scope.
Dec 11 09:16:18 compute-0 ceph-mon[74426]: 7.1a scrub starts
Dec 11 09:16:18 compute-0 ceph-mon[74426]: 7.1a scrub ok
Dec 11 09:16:18 compute-0 ceph-mon[74426]: pgmap v17: 194 pgs: 194 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 11 09:16:18 compute-0 ceph-mon[74426]: from='mgr.14400 192.168.122.100:0/1063598679' entity='mgr.compute-0.wwpcae' 
Dec 11 09:16:18 compute-0 systemd[1]: Started libcrun container.
Dec 11 09:16:18 compute-0 podman[92737]: 2025-12-11 09:16:17.943578933 +0000 UTC m=+0.026814090 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec 11 09:16:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e59be6dc30a4633689ac2ad7b040bcb95add66ad5e86628bff82bddeaf21bbbb/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 11 09:16:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e59be6dc30a4633689ac2ad7b040bcb95add66ad5e86628bff82bddeaf21bbbb/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 11 09:16:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e59be6dc30a4633689ac2ad7b040bcb95add66ad5e86628bff82bddeaf21bbbb/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 11 09:16:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e59be6dc30a4633689ac2ad7b040bcb95add66ad5e86628bff82bddeaf21bbbb/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 11 09:16:18 compute-0 podman[92737]: 2025-12-11 09:16:18.055584159 +0000 UTC m=+0.138819316 container init ef8771cdaa000be12a26f2e86fe1aa04fe7e9fa78a23496a53c5b334d762436f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=wonderful_fermat, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, CEPH_REF=squid, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 11 09:16:18 compute-0 podman[92737]: 2025-12-11 09:16:18.070707513 +0000 UTC m=+0.153942650 container start ef8771cdaa000be12a26f2e86fe1aa04fe7e9fa78a23496a53c5b334d762436f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=wonderful_fermat, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 11 09:16:18 compute-0 podman[92737]: 2025-12-11 09:16:18.07448672 +0000 UTC m=+0.157721857 container attach ef8771cdaa000be12a26f2e86fe1aa04fe7e9fa78a23496a53c5b334d762436f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=wonderful_fermat, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec 11 09:16:18 compute-0 wonderful_fermat[92751]: {
Dec 11 09:16:18 compute-0 wonderful_fermat[92751]:     "1": [
Dec 11 09:16:18 compute-0 wonderful_fermat[92751]:         {
Dec 11 09:16:18 compute-0 wonderful_fermat[92751]:             "devices": [
Dec 11 09:16:18 compute-0 wonderful_fermat[92751]:                 "/dev/loop3"
Dec 11 09:16:18 compute-0 wonderful_fermat[92751]:             ],
Dec 11 09:16:18 compute-0 wonderful_fermat[92751]:             "lv_name": "ceph_lv0",
Dec 11 09:16:18 compute-0 wonderful_fermat[92751]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec 11 09:16:18 compute-0 wonderful_fermat[92751]:             "lv_size": "21470642176",
Dec 11 09:16:18 compute-0 wonderful_fermat[92751]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=TIjPv6-JN4c-U1Hf-F3k1-2mgj-CwGi-pbjIHh,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=865a308b-cdc3-4034-b5eb-feb596b462bf,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 11 09:16:18 compute-0 wonderful_fermat[92751]:             "lv_uuid": "TIjPv6-JN4c-U1Hf-F3k1-2mgj-CwGi-pbjIHh",
Dec 11 09:16:18 compute-0 wonderful_fermat[92751]:             "name": "ceph_lv0",
Dec 11 09:16:18 compute-0 wonderful_fermat[92751]:             "path": "/dev/ceph_vg0/ceph_lv0",
Dec 11 09:16:18 compute-0 wonderful_fermat[92751]:             "tags": {
Dec 11 09:16:18 compute-0 wonderful_fermat[92751]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec 11 09:16:18 compute-0 wonderful_fermat[92751]:                 "ceph.block_uuid": "TIjPv6-JN4c-U1Hf-F3k1-2mgj-CwGi-pbjIHh",
Dec 11 09:16:18 compute-0 wonderful_fermat[92751]:                 "ceph.cephx_lockbox_secret": "",
Dec 11 09:16:18 compute-0 wonderful_fermat[92751]:                 "ceph.cluster_fsid": "31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060",
Dec 11 09:16:18 compute-0 wonderful_fermat[92751]:                 "ceph.cluster_name": "ceph",
Dec 11 09:16:18 compute-0 wonderful_fermat[92751]:                 "ceph.crush_device_class": "",
Dec 11 09:16:18 compute-0 wonderful_fermat[92751]:                 "ceph.encrypted": "0",
Dec 11 09:16:18 compute-0 wonderful_fermat[92751]:                 "ceph.osd_fsid": "865a308b-cdc3-4034-b5eb-feb596b462bf",
Dec 11 09:16:18 compute-0 wonderful_fermat[92751]:                 "ceph.osd_id": "1",
Dec 11 09:16:18 compute-0 wonderful_fermat[92751]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 11 09:16:18 compute-0 wonderful_fermat[92751]:                 "ceph.type": "block",
Dec 11 09:16:18 compute-0 wonderful_fermat[92751]:                 "ceph.vdo": "0",
Dec 11 09:16:18 compute-0 wonderful_fermat[92751]:                 "ceph.with_tpm": "0"
Dec 11 09:16:18 compute-0 wonderful_fermat[92751]:             },
Dec 11 09:16:18 compute-0 wonderful_fermat[92751]:             "type": "block",
Dec 11 09:16:18 compute-0 wonderful_fermat[92751]:             "vg_name": "ceph_vg0"
Dec 11 09:16:18 compute-0 wonderful_fermat[92751]:         }
Dec 11 09:16:18 compute-0 wonderful_fermat[92751]:     ]
Dec 11 09:16:18 compute-0 wonderful_fermat[92751]: }
Dec 11 09:16:18 compute-0 systemd[1]: libpod-ef8771cdaa000be12a26f2e86fe1aa04fe7e9fa78a23496a53c5b334d762436f.scope: Deactivated successfully.
Dec 11 09:16:18 compute-0 podman[92737]: 2025-12-11 09:16:18.39229168 +0000 UTC m=+0.475526817 container died ef8771cdaa000be12a26f2e86fe1aa04fe7e9fa78a23496a53c5b334d762436f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=wonderful_fermat, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.license=GPLv2)
Dec 11 09:16:18 compute-0 systemd[1]: var-lib-containers-storage-overlay-e59be6dc30a4633689ac2ad7b040bcb95add66ad5e86628bff82bddeaf21bbbb-merged.mount: Deactivated successfully.
Dec 11 09:16:18 compute-0 podman[92737]: 2025-12-11 09:16:18.441060807 +0000 UTC m=+0.524295944 container remove ef8771cdaa000be12a26f2e86fe1aa04fe7e9fa78a23496a53c5b334d762436f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=wonderful_fermat, CEPH_REF=squid, ceph=True, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 11 09:16:18 compute-0 systemd[1]: libpod-conmon-ef8771cdaa000be12a26f2e86fe1aa04fe7e9fa78a23496a53c5b334d762436f.scope: Deactivated successfully.
Dec 11 09:16:18 compute-0 sudo[92632]: pam_unix(sudo:session): session closed for user root
Dec 11 09:16:18 compute-0 sudo[92774]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 11 09:16:18 compute-0 sudo[92774]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 11 09:16:18 compute-0 sudo[92774]: pam_unix(sudo:session): session closed for user root
Dec 11 09:16:18 compute-0 sudo[92799]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060 -- raw list --format json
Dec 11 09:16:18 compute-0 sudo[92799]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 11 09:16:18 compute-0 ceph-mgr[74715]: log_channel(cluster) log [DBG] : pgmap v18: 194 pgs: 194 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 11 09:16:19 compute-0 ceph-mon[74426]: mon.compute-0@0(leader).osd e47 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 11 09:16:19 compute-0 podman[92862]: 2025-12-11 09:16:19.061778937 +0000 UTC m=+0.045610079 container create 32c9bb3ae4f861b3c976255517f1119791380c95ef24c595b1fa366adac30015 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=dazzling_rubin, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Dec 11 09:16:19 compute-0 systemd[1]: Started libpod-conmon-32c9bb3ae4f861b3c976255517f1119791380c95ef24c595b1fa366adac30015.scope.
Dec 11 09:16:19 compute-0 systemd[1]: Started libcrun container.
Dec 11 09:16:19 compute-0 podman[92862]: 2025-12-11 09:16:19.040333955 +0000 UTC m=+0.024165127 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec 11 09:16:19 compute-0 podman[92862]: 2025-12-11 09:16:19.142507854 +0000 UTC m=+0.126339016 container init 32c9bb3ae4f861b3c976255517f1119791380c95ef24c595b1fa366adac30015 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=dazzling_rubin, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Dec 11 09:16:19 compute-0 podman[92862]: 2025-12-11 09:16:19.150445313 +0000 UTC m=+0.134276455 container start 32c9bb3ae4f861b3c976255517f1119791380c95ef24c595b1fa366adac30015 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=dazzling_rubin, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 11 09:16:19 compute-0 podman[92862]: 2025-12-11 09:16:19.153456467 +0000 UTC m=+0.137287649 container attach 32c9bb3ae4f861b3c976255517f1119791380c95ef24c595b1fa366adac30015 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=dazzling_rubin, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Dec 11 09:16:19 compute-0 systemd[1]: libpod-32c9bb3ae4f861b3c976255517f1119791380c95ef24c595b1fa366adac30015.scope: Deactivated successfully.
Dec 11 09:16:19 compute-0 dazzling_rubin[92878]: 167 167
Dec 11 09:16:19 compute-0 podman[92862]: 2025-12-11 09:16:19.156306596 +0000 UTC m=+0.140137738 container died 32c9bb3ae4f861b3c976255517f1119791380c95ef24c595b1fa366adac30015 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=dazzling_rubin, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 11 09:16:19 compute-0 systemd[1]: var-lib-containers-storage-overlay-3624f27d41876a2e34bc9578cbc3d6b751a0e6f531615349ffa7d396d747149e-merged.mount: Deactivated successfully.
Dec 11 09:16:19 compute-0 podman[92862]: 2025-12-11 09:16:19.194213554 +0000 UTC m=+0.178044696 container remove 32c9bb3ae4f861b3c976255517f1119791380c95ef24c595b1fa366adac30015 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=dazzling_rubin, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 11 09:16:19 compute-0 systemd[1]: libpod-conmon-32c9bb3ae4f861b3c976255517f1119791380c95ef24c595b1fa366adac30015.scope: Deactivated successfully.
Dec 11 09:16:19 compute-0 podman[92901]: 2025-12-11 09:16:19.365449253 +0000 UTC m=+0.046532507 container create a39372d329f2ec36e248e19034b764bf89afcd984523b531190d993dedcd4362 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gracious_benz, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 11 09:16:19 compute-0 podman[92901]: 2025-12-11 09:16:19.345034375 +0000 UTC m=+0.026117649 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec 11 09:16:19 compute-0 systemd[1]: Started libpod-conmon-a39372d329f2ec36e248e19034b764bf89afcd984523b531190d993dedcd4362.scope.
Dec 11 09:16:19 compute-0 systemd[1]: Started libcrun container.
Dec 11 09:16:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/56838b7b68340233bc0a11cb0c1da97da22f58b9b04968722a7320df7da1dbff/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 11 09:16:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/56838b7b68340233bc0a11cb0c1da97da22f58b9b04968722a7320df7da1dbff/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 11 09:16:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/56838b7b68340233bc0a11cb0c1da97da22f58b9b04968722a7320df7da1dbff/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 11 09:16:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/56838b7b68340233bc0a11cb0c1da97da22f58b9b04968722a7320df7da1dbff/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 11 09:16:19 compute-0 podman[92901]: 2025-12-11 09:16:19.58990392 +0000 UTC m=+0.270987204 container init a39372d329f2ec36e248e19034b764bf89afcd984523b531190d993dedcd4362 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gracious_benz, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2)
Dec 11 09:16:19 compute-0 podman[92901]: 2025-12-11 09:16:19.597493997 +0000 UTC m=+0.278577251 container start a39372d329f2ec36e248e19034b764bf89afcd984523b531190d993dedcd4362 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gracious_benz, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 11 09:16:19 compute-0 podman[92901]: 2025-12-11 09:16:19.601439151 +0000 UTC m=+0.282522425 container attach a39372d329f2ec36e248e19034b764bf89afcd984523b531190d993dedcd4362 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gracious_benz, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Dec 11 09:16:20 compute-0 ceph-mon[74426]: pgmap v18: 194 pgs: 194 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 11 09:16:20 compute-0 lvm[92991]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Dec 11 09:16:20 compute-0 lvm[92991]: VG ceph_vg0 finished
Dec 11 09:16:20 compute-0 gracious_benz[92917]: {}
Dec 11 09:16:20 compute-0 ceph-mon[74426]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0)
Dec 11 09:16:20 compute-0 systemd[1]: libpod-a39372d329f2ec36e248e19034b764bf89afcd984523b531190d993dedcd4362.scope: Deactivated successfully.
Dec 11 09:16:20 compute-0 systemd[1]: libpod-a39372d329f2ec36e248e19034b764bf89afcd984523b531190d993dedcd4362.scope: Consumed 1.527s CPU time.
Dec 11 09:16:20 compute-0 podman[92901]: 2025-12-11 09:16:20.5357851 +0000 UTC m=+1.216868384 container died a39372d329f2ec36e248e19034b764bf89afcd984523b531190d993dedcd4362 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gracious_benz, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 11 09:16:20 compute-0 ceph-mon[74426]: log_channel(audit) log [INF] : from='mgr.14400 192.168.122.100:0/1063598679' entity='mgr.compute-0.wwpcae' 
Dec 11 09:16:20 compute-0 ceph-mon[74426]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Dec 11 09:16:20 compute-0 ceph-mon[74426]: log_channel(audit) log [INF] : from='mgr.14400 192.168.122.100:0/1063598679' entity='mgr.compute-0.wwpcae' 
Dec 11 09:16:20 compute-0 systemd[1]: var-lib-containers-storage-overlay-56838b7b68340233bc0a11cb0c1da97da22f58b9b04968722a7320df7da1dbff-merged.mount: Deactivated successfully.
Dec 11 09:16:20 compute-0 podman[92901]: 2025-12-11 09:16:20.592989651 +0000 UTC m=+1.274072905 container remove a39372d329f2ec36e248e19034b764bf89afcd984523b531190d993dedcd4362 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gracious_benz, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 11 09:16:20 compute-0 systemd[1]: libpod-conmon-a39372d329f2ec36e248e19034b764bf89afcd984523b531190d993dedcd4362.scope: Deactivated successfully.
Dec 11 09:16:20 compute-0 sudo[92799]: pam_unix(sudo:session): session closed for user root
Dec 11 09:16:20 compute-0 ceph-mon[74426]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec 11 09:16:20 compute-0 ceph-mon[74426]: log_channel(audit) log [INF] : from='mgr.14400 192.168.122.100:0/1063598679' entity='mgr.compute-0.wwpcae' 
Dec 11 09:16:20 compute-0 ceph-mon[74426]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec 11 09:16:20 compute-0 ceph-mon[74426]: log_channel(audit) log [INF] : from='mgr.14400 192.168.122.100:0/1063598679' entity='mgr.compute-0.wwpcae' 
Dec 11 09:16:20 compute-0 ceph-mgr[74715]: [progress INFO root] update: starting ev ba601e07-6e92-4b80-b0e9-c2a7b2cb020f (Updating rgw.rgw deployment (+3 -> 3))
Dec 11 09:16:20 compute-0 ceph-mon[74426]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-2.aenhnr", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]} v 0)
Dec 11 09:16:20 compute-0 ceph-mon[74426]: log_channel(audit) log [INF] : from='mgr.14400 192.168.122.100:0/1063598679' entity='mgr.compute-0.wwpcae' cmd=[{"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-2.aenhnr", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]: dispatch
Dec 11 09:16:20 compute-0 ceph-mon[74426]: log_channel(audit) log [INF] : from='mgr.14400 192.168.122.100:0/1063598679' entity='mgr.compute-0.wwpcae' cmd='[{"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-2.aenhnr", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]': finished
Dec 11 09:16:20 compute-0 ceph-mon[74426]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config set, name=rgw_frontends}] v 0)
Dec 11 09:16:20 compute-0 ceph-mon[74426]: log_channel(audit) log [INF] : from='mgr.14400 192.168.122.100:0/1063598679' entity='mgr.compute-0.wwpcae' 
Dec 11 09:16:20 compute-0 ceph-mon[74426]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 11 09:16:20 compute-0 ceph-mon[74426]: log_channel(audit) log [DBG] : from='mgr.14400 192.168.122.100:0/1063598679' entity='mgr.compute-0.wwpcae' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 11 09:16:20 compute-0 ceph-mgr[74715]: [cephadm INFO cephadm.serve] Deploying daemon rgw.rgw.compute-2.aenhnr on compute-2
Dec 11 09:16:20 compute-0 ceph-mgr[74715]: log_channel(cephadm) log [INF] : Deploying daemon rgw.rgw.compute-2.aenhnr on compute-2
Dec 11 09:16:20 compute-0 ceph-mgr[74715]: log_channel(cluster) log [DBG] : pgmap v19: 194 pgs: 194 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 11 09:16:21 compute-0 ceph-mon[74426]: from='mgr.14400 192.168.122.100:0/1063598679' entity='mgr.compute-0.wwpcae' 
Dec 11 09:16:21 compute-0 ceph-mon[74426]: from='mgr.14400 192.168.122.100:0/1063598679' entity='mgr.compute-0.wwpcae' 
Dec 11 09:16:21 compute-0 ceph-mon[74426]: from='mgr.14400 192.168.122.100:0/1063598679' entity='mgr.compute-0.wwpcae' 
Dec 11 09:16:21 compute-0 ceph-mon[74426]: from='mgr.14400 192.168.122.100:0/1063598679' entity='mgr.compute-0.wwpcae' 
Dec 11 09:16:21 compute-0 ceph-mon[74426]: from='mgr.14400 192.168.122.100:0/1063598679' entity='mgr.compute-0.wwpcae' cmd=[{"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-2.aenhnr", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]: dispatch
Dec 11 09:16:21 compute-0 ceph-mon[74426]: from='mgr.14400 192.168.122.100:0/1063598679' entity='mgr.compute-0.wwpcae' cmd='[{"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-2.aenhnr", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]': finished
Dec 11 09:16:21 compute-0 ceph-mon[74426]: from='mgr.14400 192.168.122.100:0/1063598679' entity='mgr.compute-0.wwpcae' 
Dec 11 09:16:21 compute-0 ceph-mon[74426]: from='mgr.14400 192.168.122.100:0/1063598679' entity='mgr.compute-0.wwpcae' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 11 09:16:22 compute-0 ceph-mon[74426]: Deploying daemon rgw.rgw.compute-2.aenhnr on compute-2
Dec 11 09:16:22 compute-0 ceph-mon[74426]: pgmap v19: 194 pgs: 194 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 11 09:16:22 compute-0 ceph-mon[74426]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0)
Dec 11 09:16:22 compute-0 ceph-mon[74426]: log_channel(audit) log [INF] : from='mgr.14400 192.168.122.100:0/1063598679' entity='mgr.compute-0.wwpcae' 
Dec 11 09:16:22 compute-0 ceph-mon[74426]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Dec 11 09:16:22 compute-0 ceph-mon[74426]: log_channel(audit) log [INF] : from='mgr.14400 192.168.122.100:0/1063598679' entity='mgr.compute-0.wwpcae' 
Dec 11 09:16:22 compute-0 ceph-mon[74426]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.rgw.rgw}] v 0)
Dec 11 09:16:22 compute-0 ceph-mon[74426]: log_channel(audit) log [INF] : from='mgr.14400 192.168.122.100:0/1063598679' entity='mgr.compute-0.wwpcae' 
Dec 11 09:16:22 compute-0 ceph-mon[74426]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-1.hnfveq", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]} v 0)
Dec 11 09:16:22 compute-0 ceph-mon[74426]: log_channel(audit) log [INF] : from='mgr.14400 192.168.122.100:0/1063598679' entity='mgr.compute-0.wwpcae' cmd=[{"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-1.hnfveq", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]: dispatch
Dec 11 09:16:22 compute-0 ceph-mon[74426]: log_channel(audit) log [INF] : from='mgr.14400 192.168.122.100:0/1063598679' entity='mgr.compute-0.wwpcae' cmd='[{"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-1.hnfveq", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]': finished
Dec 11 09:16:22 compute-0 ceph-mon[74426]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config set, name=rgw_frontends}] v 0)
Dec 11 09:16:22 compute-0 ceph-mon[74426]: log_channel(audit) log [INF] : from='mgr.14400 192.168.122.100:0/1063598679' entity='mgr.compute-0.wwpcae' 
Dec 11 09:16:22 compute-0 ceph-mon[74426]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 11 09:16:22 compute-0 ceph-mon[74426]: log_channel(audit) log [DBG] : from='mgr.14400 192.168.122.100:0/1063598679' entity='mgr.compute-0.wwpcae' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 11 09:16:22 compute-0 ceph-mgr[74715]: [cephadm INFO cephadm.serve] Deploying daemon rgw.rgw.compute-1.hnfveq on compute-1
Dec 11 09:16:22 compute-0 ceph-mgr[74715]: log_channel(cephadm) log [INF] : Deploying daemon rgw.rgw.compute-1.hnfveq on compute-1
Dec 11 09:16:22 compute-0 ceph-mgr[74715]: log_channel(cluster) log [DBG] : pgmap v20: 194 pgs: 194 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 11 09:16:23 compute-0 sudo[93031]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kflxeshjuddherbtvpfyjbldmcdgwuin ; /usr/bin/python3'
Dec 11 09:16:23 compute-0 sudo[93031]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 11 09:16:23 compute-0 python3[93033]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint radosgw-admin quay.io/ceph/ceph:v19 --fsid 31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch ps --daemon-type rgw --format json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 11 09:16:23 compute-0 podman[93034]: 2025-12-11 09:16:23.28170901 +0000 UTC m=+0.052548786 container create 3bf515006fcf022773d1ed6ec54910258f7d805a886e22282c72b58c00db11eb (image=quay.io/ceph/ceph:v19, name=xenodochial_tharp, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 11 09:16:23 compute-0 systemd[1]: Started libpod-conmon-3bf515006fcf022773d1ed6ec54910258f7d805a886e22282c72b58c00db11eb.scope.
Dec 11 09:16:23 compute-0 podman[93034]: 2025-12-11 09:16:23.254696484 +0000 UTC m=+0.025536280 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Dec 11 09:16:23 compute-0 systemd[1]: Started libcrun container.
Dec 11 09:16:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6e384c2747abf2e004ae215f51868d6f1f3803f7668de8e4583f2484b26e9d3e/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 11 09:16:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6e384c2747abf2e004ae215f51868d6f1f3803f7668de8e4583f2484b26e9d3e/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec 11 09:16:23 compute-0 podman[93034]: 2025-12-11 09:16:23.422055493 +0000 UTC m=+0.192895289 container init 3bf515006fcf022773d1ed6ec54910258f7d805a886e22282c72b58c00db11eb (image=quay.io/ceph/ceph:v19, name=xenodochial_tharp, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 11 09:16:23 compute-0 podman[93034]: 2025-12-11 09:16:23.429995352 +0000 UTC m=+0.200835128 container start 3bf515006fcf022773d1ed6ec54910258f7d805a886e22282c72b58c00db11eb (image=quay.io/ceph/ceph:v19, name=xenodochial_tharp, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 11 09:16:23 compute-0 podman[93034]: 2025-12-11 09:16:23.458214135 +0000 UTC m=+0.229053911 container attach 3bf515006fcf022773d1ed6ec54910258f7d805a886e22282c72b58c00db11eb (image=quay.io/ceph/ceph:v19, name=xenodochial_tharp, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 11 09:16:23 compute-0 xenodochial_tharp[93049]: ERROR: invalid flag --daemon-type
Dec 11 09:16:23 compute-0 systemd[1]: libpod-3bf515006fcf022773d1ed6ec54910258f7d805a886e22282c72b58c00db11eb.scope: Deactivated successfully.
Dec 11 09:16:23 compute-0 podman[93034]: 2025-12-11 09:16:23.497235867 +0000 UTC m=+0.268075643 container died 3bf515006fcf022773d1ed6ec54910258f7d805a886e22282c72b58c00db11eb (image=quay.io/ceph/ceph:v19, name=xenodochial_tharp, io.buildah.version=1.40.1, ceph=True, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid)
Dec 11 09:16:23 compute-0 systemd[1]: var-lib-containers-storage-overlay-6e384c2747abf2e004ae215f51868d6f1f3803f7668de8e4583f2484b26e9d3e-merged.mount: Deactivated successfully.
Dec 11 09:16:23 compute-0 podman[93034]: 2025-12-11 09:16:23.645834138 +0000 UTC m=+0.416673924 container remove 3bf515006fcf022773d1ed6ec54910258f7d805a886e22282c72b58c00db11eb (image=quay.io/ceph/ceph:v19, name=xenodochial_tharp, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True)
Dec 11 09:16:23 compute-0 systemd[1]: libpod-conmon-3bf515006fcf022773d1ed6ec54910258f7d805a886e22282c72b58c00db11eb.scope: Deactivated successfully.
Dec 11 09:16:23 compute-0 sudo[93031]: pam_unix(sudo:session): session closed for user root
Dec 11 09:16:23 compute-0 ceph-mon[74426]: mon.compute-0@0(leader).osd e47 do_prune osdmap full prune enabled
Dec 11 09:16:23 compute-0 ceph-mon[74426]: from='mgr.14400 192.168.122.100:0/1063598679' entity='mgr.compute-0.wwpcae' 
Dec 11 09:16:23 compute-0 ceph-mon[74426]: from='mgr.14400 192.168.122.100:0/1063598679' entity='mgr.compute-0.wwpcae' 
Dec 11 09:16:23 compute-0 ceph-mon[74426]: from='mgr.14400 192.168.122.100:0/1063598679' entity='mgr.compute-0.wwpcae' 
Dec 11 09:16:23 compute-0 ceph-mon[74426]: from='mgr.14400 192.168.122.100:0/1063598679' entity='mgr.compute-0.wwpcae' cmd=[{"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-1.hnfveq", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]: dispatch
Dec 11 09:16:23 compute-0 ceph-mon[74426]: from='mgr.14400 192.168.122.100:0/1063598679' entity='mgr.compute-0.wwpcae' cmd='[{"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-1.hnfveq", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]': finished
Dec 11 09:16:23 compute-0 ceph-mon[74426]: from='mgr.14400 192.168.122.100:0/1063598679' entity='mgr.compute-0.wwpcae' 
Dec 11 09:16:23 compute-0 ceph-mon[74426]: from='mgr.14400 192.168.122.100:0/1063598679' entity='mgr.compute-0.wwpcae' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 11 09:16:23 compute-0 ceph-mon[74426]: Deploying daemon rgw.rgw.compute-1.hnfveq on compute-1
Dec 11 09:16:23 compute-0 ceph-mon[74426]: pgmap v20: 194 pgs: 194 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 11 09:16:23 compute-0 ceph-mon[74426]: mon.compute-0@0(leader).osd e48 e48: 3 total, 3 up, 3 in
Dec 11 09:16:23 compute-0 ceph-mon[74426]: log_channel(cluster) log [DBG] : osdmap e48: 3 total, 3 up, 3 in
Dec 11 09:16:23 compute-0 ceph-mon[74426]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool application enable","pool": ".rgw.root","app": "rgw"} v 0)
Dec 11 09:16:23 compute-0 ceph-mon[74426]: log_channel(audit) log [INF] : from='client.? ' entity='client.rgw.rgw.compute-2.aenhnr' cmd=[{"prefix": "osd pool application enable","pool": ".rgw.root","app": "rgw"}]: dispatch
Dec 11 09:16:23 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 48 pg[9.0( empty local-lis/les=0/0 n=0 ec=48/48 lis/c=0/0 les/c/f=0/0/0 sis=48) [1] r=0 lpr=48 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 11 09:16:24 compute-0 ceph-mon[74426]: mon.compute-0@0(leader).osd e48 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 11 09:16:24 compute-0 ceph-mon[74426]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Dec 11 09:16:24 compute-0 ceph-mgr[74715]: log_channel(cluster) log [DBG] : pgmap v22: 195 pgs: 1 unknown, 194 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 11 09:16:24 compute-0 ceph-mon[74426]: mon.compute-0@0(leader).osd e48 do_prune osdmap full prune enabled
Dec 11 09:16:24 compute-0 ceph-mon[74426]: log_channel(audit) log [INF] : from='mgr.14400 192.168.122.100:0/1063598679' entity='mgr.compute-0.wwpcae' 
Dec 11 09:16:24 compute-0 ceph-mon[74426]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Dec 11 09:16:24 compute-0 ceph-mon[74426]: log_channel(audit) log [INF] : from='client.? ' entity='client.rgw.rgw.compute-2.aenhnr' cmd='[{"prefix": "osd pool application enable","pool": ".rgw.root","app": "rgw"}]': finished
Dec 11 09:16:24 compute-0 ceph-mon[74426]: mon.compute-0@0(leader).osd e49 e49: 3 total, 3 up, 3 in
Dec 11 09:16:24 compute-0 ceph-mon[74426]: osdmap e48: 3 total, 3 up, 3 in
Dec 11 09:16:24 compute-0 ceph-mon[74426]: from='client.? 192.168.122.102:0/1385895065' entity='client.rgw.rgw.compute-2.aenhnr' cmd=[{"prefix": "osd pool application enable","pool": ".rgw.root","app": "rgw"}]: dispatch
Dec 11 09:16:24 compute-0 ceph-mon[74426]: from='client.? ' entity='client.rgw.rgw.compute-2.aenhnr' cmd=[{"prefix": "osd pool application enable","pool": ".rgw.root","app": "rgw"}]: dispatch
Dec 11 09:16:24 compute-0 ceph-mon[74426]: log_channel(cluster) log [DBG] : osdmap e49: 3 total, 3 up, 3 in
Dec 11 09:16:24 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 49 pg[9.0( empty local-lis/les=48/49 n=0 ec=48/48 lis/c=0/0 les/c/f=0/0/0 sis=48) [1] r=0 lpr=48 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 11 09:16:24 compute-0 ceph-mon[74426]: log_channel(audit) log [INF] : from='mgr.14400 192.168.122.100:0/1063598679' entity='mgr.compute-0.wwpcae' 
Dec 11 09:16:24 compute-0 ceph-mon[74426]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.rgw.rgw}] v 0)
Dec 11 09:16:24 compute-0 ceph-mon[74426]: log_channel(cluster) log [WRN] : Health check failed: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Dec 11 09:16:25 compute-0 ceph-mon[74426]: log_channel(audit) log [INF] : from='mgr.14400 192.168.122.100:0/1063598679' entity='mgr.compute-0.wwpcae' 
Dec 11 09:16:25 compute-0 ceph-mon[74426]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-0.dblyhr", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]} v 0)
Dec 11 09:16:25 compute-0 ceph-mon[74426]: log_channel(audit) log [INF] : from='mgr.14400 192.168.122.100:0/1063598679' entity='mgr.compute-0.wwpcae' cmd=[{"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-0.dblyhr", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]: dispatch
Dec 11 09:16:25 compute-0 ceph-mon[74426]: log_channel(audit) log [INF] : from='mgr.14400 192.168.122.100:0/1063598679' entity='mgr.compute-0.wwpcae' cmd='[{"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-0.dblyhr", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]': finished
Dec 11 09:16:25 compute-0 ceph-mon[74426]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config set, name=rgw_frontends}] v 0)
Dec 11 09:16:25 compute-0 ceph-mon[74426]: log_channel(audit) log [INF] : from='mgr.14400 192.168.122.100:0/1063598679' entity='mgr.compute-0.wwpcae' 
Dec 11 09:16:25 compute-0 ceph-mon[74426]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 11 09:16:25 compute-0 ceph-mon[74426]: log_channel(audit) log [DBG] : from='mgr.14400 192.168.122.100:0/1063598679' entity='mgr.compute-0.wwpcae' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 11 09:16:25 compute-0 ceph-mgr[74715]: [cephadm INFO cephadm.serve] Deploying daemon rgw.rgw.compute-0.dblyhr on compute-0
Dec 11 09:16:25 compute-0 ceph-mgr[74715]: log_channel(cephadm) log [INF] : Deploying daemon rgw.rgw.compute-0.dblyhr on compute-0
Dec 11 09:16:25 compute-0 sudo[93083]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 11 09:16:25 compute-0 sudo[93083]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 11 09:16:25 compute-0 sudo[93083]: pam_unix(sudo:session): session closed for user root
Dec 11 09:16:25 compute-0 sudo[93108]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 _orch deploy --fsid 31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060
Dec 11 09:16:25 compute-0 sudo[93108]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 11 09:16:25 compute-0 podman[93176]: 2025-12-11 09:16:25.845800747 +0000 UTC m=+0.081992168 container create 0ab2a96256d1b5c8750629465b54e739fcc6e97e0aa08eac3056f8628be9a074 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=charming_feynman, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, ceph=True, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=squid, OSD_FLAVOR=default)
Dec 11 09:16:25 compute-0 podman[93176]: 2025-12-11 09:16:25.792011913 +0000 UTC m=+0.028203354 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec 11 09:16:25 compute-0 systemd[1]: Started libpod-conmon-0ab2a96256d1b5c8750629465b54e739fcc6e97e0aa08eac3056f8628be9a074.scope.
Dec 11 09:16:25 compute-0 ceph-mon[74426]: mon.compute-0@0(leader).osd e49 do_prune osdmap full prune enabled
Dec 11 09:16:25 compute-0 ceph-mon[74426]: pgmap v22: 195 pgs: 1 unknown, 194 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 11 09:16:25 compute-0 ceph-mon[74426]: from='mgr.14400 192.168.122.100:0/1063598679' entity='mgr.compute-0.wwpcae' 
Dec 11 09:16:25 compute-0 ceph-mon[74426]: from='client.? ' entity='client.rgw.rgw.compute-2.aenhnr' cmd='[{"prefix": "osd pool application enable","pool": ".rgw.root","app": "rgw"}]': finished
Dec 11 09:16:25 compute-0 ceph-mon[74426]: osdmap e49: 3 total, 3 up, 3 in
Dec 11 09:16:25 compute-0 ceph-mon[74426]: from='mgr.14400 192.168.122.100:0/1063598679' entity='mgr.compute-0.wwpcae' 
Dec 11 09:16:25 compute-0 ceph-mon[74426]: Health check failed: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Dec 11 09:16:25 compute-0 ceph-mon[74426]: from='mgr.14400 192.168.122.100:0/1063598679' entity='mgr.compute-0.wwpcae' 
Dec 11 09:16:25 compute-0 ceph-mon[74426]: from='mgr.14400 192.168.122.100:0/1063598679' entity='mgr.compute-0.wwpcae' cmd=[{"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-0.dblyhr", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]: dispatch
Dec 11 09:16:25 compute-0 ceph-mon[74426]: from='mgr.14400 192.168.122.100:0/1063598679' entity='mgr.compute-0.wwpcae' cmd='[{"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-0.dblyhr", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]': finished
Dec 11 09:16:25 compute-0 ceph-mon[74426]: from='mgr.14400 192.168.122.100:0/1063598679' entity='mgr.compute-0.wwpcae' 
Dec 11 09:16:25 compute-0 ceph-mon[74426]: from='mgr.14400 192.168.122.100:0/1063598679' entity='mgr.compute-0.wwpcae' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 11 09:16:25 compute-0 ceph-mon[74426]: Deploying daemon rgw.rgw.compute-0.dblyhr on compute-0
Dec 11 09:16:25 compute-0 systemd[1]: Started libcrun container.
Dec 11 09:16:25 compute-0 ceph-mon[74426]: mon.compute-0@0(leader).osd e50 e50: 3 total, 3 up, 3 in
Dec 11 09:16:25 compute-0 ceph-mon[74426]: log_channel(cluster) log [DBG] : osdmap e50: 3 total, 3 up, 3 in
Dec 11 09:16:25 compute-0 ceph-mon[74426]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"} v 0)
Dec 11 09:16:25 compute-0 ceph-mon[74426]: log_channel(audit) log [INF] : from='client.? ' entity='client.rgw.rgw.compute-2.aenhnr' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"}]: dispatch
Dec 11 09:16:25 compute-0 rsyslogd[1010]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Dec 11 09:16:25 compute-0 ceph-mon[74426]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"} v 0)
Dec 11 09:16:25 compute-0 ceph-mon[74426]: log_channel(audit) log [INF] : from='client.? ' entity='client.rgw.rgw.compute-1.hnfveq' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"}]: dispatch
Dec 11 09:16:25 compute-0 podman[93176]: 2025-12-11 09:16:25.953124596 +0000 UTC m=+0.189316037 container init 0ab2a96256d1b5c8750629465b54e739fcc6e97e0aa08eac3056f8628be9a074 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=charming_feynman, org.label-schema.license=GPLv2, CEPH_REF=squid, ceph=True, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 11 09:16:25 compute-0 podman[93176]: 2025-12-11 09:16:25.961841969 +0000 UTC m=+0.198033410 container start 0ab2a96256d1b5c8750629465b54e739fcc6e97e0aa08eac3056f8628be9a074 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=charming_feynman, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Dec 11 09:16:25 compute-0 podman[93176]: 2025-12-11 09:16:25.966729522 +0000 UTC m=+0.202920973 container attach 0ab2a96256d1b5c8750629465b54e739fcc6e97e0aa08eac3056f8628be9a074 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=charming_feynman, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 11 09:16:25 compute-0 charming_feynman[93192]: 167 167
Dec 11 09:16:25 compute-0 systemd[1]: libpod-0ab2a96256d1b5c8750629465b54e739fcc6e97e0aa08eac3056f8628be9a074.scope: Deactivated successfully.
Dec 11 09:16:25 compute-0 podman[93176]: 2025-12-11 09:16:25.970542502 +0000 UTC m=+0.206733923 container died 0ab2a96256d1b5c8750629465b54e739fcc6e97e0aa08eac3056f8628be9a074 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=charming_feynman, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True)
Dec 11 09:16:25 compute-0 systemd[1]: var-lib-containers-storage-overlay-fc2666841314508acaf6c38836c3ef4d22c36b0884fcccde33ebc2d0a085f51d-merged.mount: Deactivated successfully.
Dec 11 09:16:26 compute-0 podman[93176]: 2025-12-11 09:16:26.010054319 +0000 UTC m=+0.246245740 container remove 0ab2a96256d1b5c8750629465b54e739fcc6e97e0aa08eac3056f8628be9a074 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=charming_feynman, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0)
Dec 11 09:16:26 compute-0 systemd[1]: libpod-conmon-0ab2a96256d1b5c8750629465b54e739fcc6e97e0aa08eac3056f8628be9a074.scope: Deactivated successfully.
Dec 11 09:16:26 compute-0 systemd[1]: Reloading.
Dec 11 09:16:26 compute-0 systemd-rc-local-generator[93232]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 11 09:16:26 compute-0 systemd-sysv-generator[93238]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 11 09:16:26 compute-0 systemd[1]: Reloading.
Dec 11 09:16:26 compute-0 systemd-rc-local-generator[93276]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 11 09:16:26 compute-0 systemd-sysv-generator[93280]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 11 09:16:26 compute-0 systemd[1]: Starting Ceph rgw.rgw.compute-0.dblyhr for 31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060...
Dec 11 09:16:26 compute-0 ceph-mgr[74715]: log_channel(cluster) log [DBG] : pgmap v25: 196 pgs: 2 unknown, 194 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 11 09:16:26 compute-0 ceph-mon[74426]: mon.compute-0@0(leader).osd e50 do_prune osdmap full prune enabled
Dec 11 09:16:26 compute-0 ceph-mon[74426]: log_channel(audit) log [INF] : from='client.? ' entity='client.rgw.rgw.compute-2.aenhnr' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"}]': finished
Dec 11 09:16:26 compute-0 ceph-mon[74426]: log_channel(audit) log [INF] : from='client.? ' entity='client.rgw.rgw.compute-1.hnfveq' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"}]': finished
Dec 11 09:16:26 compute-0 ceph-mon[74426]: mon.compute-0@0(leader).osd e51 e51: 3 total, 3 up, 3 in
Dec 11 09:16:26 compute-0 ceph-mon[74426]: log_channel(cluster) log [DBG] : osdmap e51: 3 total, 3 up, 3 in
Dec 11 09:16:26 compute-0 ceph-mon[74426]: osdmap e50: 3 total, 3 up, 3 in
Dec 11 09:16:26 compute-0 ceph-mon[74426]: from='client.? 192.168.122.102:0/487353682' entity='client.rgw.rgw.compute-2.aenhnr' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"}]: dispatch
Dec 11 09:16:26 compute-0 ceph-mon[74426]: from='client.? ' entity='client.rgw.rgw.compute-2.aenhnr' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"}]: dispatch
Dec 11 09:16:26 compute-0 ceph-mon[74426]: from='client.? 192.168.122.101:0/3289101500' entity='client.rgw.rgw.compute-1.hnfveq' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"}]: dispatch
Dec 11 09:16:26 compute-0 ceph-mon[74426]: from='client.? ' entity='client.rgw.rgw.compute-1.hnfveq' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"}]: dispatch
Dec 11 09:16:26 compute-0 podman[93331]: 2025-12-11 09:16:26.993810935 +0000 UTC m=+0.057843662 container create 5813d597e62203c1d4a4a9ebfbb2d84855ba826fa753709f79053a59c566acc7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-rgw-rgw-compute-0-dblyhr, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1)
Dec 11 09:16:27 compute-0 ceph-mgr[74715]: [progress WARNING root] Starting Global Recovery Event,2 pgs not in active + clean state
Dec 11 09:16:27 compute-0 ceph-mgr[74715]: [volumes INFO mgr_util] scanning for idle connections..
Dec 11 09:16:27 compute-0 ceph-mgr[74715]: [volumes INFO mgr_util] cleaning up connections: []
Dec 11 09:16:27 compute-0 ceph-mgr[74715]: [volumes INFO mgr_util] scanning for idle connections..
Dec 11 09:16:27 compute-0 ceph-mgr[74715]: [volumes INFO mgr_util] cleaning up connections: []
Dec 11 09:16:27 compute-0 ceph-mgr[74715]: [volumes INFO mgr_util] scanning for idle connections..
Dec 11 09:16:27 compute-0 ceph-mgr[74715]: [volumes INFO mgr_util] cleaning up connections: []
Dec 11 09:16:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3ea18fa20f94c06dcad8127ff862cb152802a62109b265a5ef76e853232b7938/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 11 09:16:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3ea18fa20f94c06dcad8127ff862cb152802a62109b265a5ef76e853232b7938/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 11 09:16:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3ea18fa20f94c06dcad8127ff862cb152802a62109b265a5ef76e853232b7938/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 11 09:16:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3ea18fa20f94c06dcad8127ff862cb152802a62109b265a5ef76e853232b7938/merged/var/lib/ceph/radosgw/ceph-rgw.rgw.compute-0.dblyhr supports timestamps until 2038 (0x7fffffff)
Dec 11 09:16:27 compute-0 podman[93331]: 2025-12-11 09:16:26.971864177 +0000 UTC m=+0.035896924 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec 11 09:16:27 compute-0 podman[93331]: 2025-12-11 09:16:27.076261225 +0000 UTC m=+0.140293982 container init 5813d597e62203c1d4a4a9ebfbb2d84855ba826fa753709f79053a59c566acc7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-rgw-rgw-compute-0-dblyhr, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Dec 11 09:16:27 compute-0 podman[93331]: 2025-12-11 09:16:27.083815702 +0000 UTC m=+0.147848429 container start 5813d597e62203c1d4a4a9ebfbb2d84855ba826fa753709f79053a59c566acc7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-rgw-rgw-compute-0-dblyhr, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.license=GPLv2)
Dec 11 09:16:27 compute-0 bash[93331]: 5813d597e62203c1d4a4a9ebfbb2d84855ba826fa753709f79053a59c566acc7
Dec 11 09:16:27 compute-0 systemd[1]: Started Ceph rgw.rgw.compute-0.dblyhr for 31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060.
Dec 11 09:16:27 compute-0 radosgw[93354]: deferred set uid:gid to 167:167 (ceph:ceph)
Dec 11 09:16:27 compute-0 radosgw[93354]: ceph version 19.2.3 (c92aebb279828e9c3c1f5d24613efca272649e62) squid (stable), process radosgw, pid 2
Dec 11 09:16:27 compute-0 radosgw[93354]: framework: beast
Dec 11 09:16:27 compute-0 radosgw[93354]: framework conf key: endpoint, val: 192.168.122.100:8082
Dec 11 09:16:27 compute-0 radosgw[93354]: init_numa not setting numa affinity
Dec 11 09:16:27 compute-0 sudo[93108]: pam_unix(sudo:session): session closed for user root
Dec 11 09:16:27 compute-0 ceph-mon[74426]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec 11 09:16:27 compute-0 ceph-mon[74426]: log_channel(audit) log [INF] : from='mgr.14400 192.168.122.100:0/1063598679' entity='mgr.compute-0.wwpcae' 
Dec 11 09:16:27 compute-0 ceph-mon[74426]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec 11 09:16:27 compute-0 ceph-mon[74426]: log_channel(audit) log [INF] : from='mgr.14400 192.168.122.100:0/1063598679' entity='mgr.compute-0.wwpcae' 
Dec 11 09:16:27 compute-0 ceph-mon[74426]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.rgw.rgw}] v 0)
Dec 11 09:16:27 compute-0 ceph-mon[74426]: log_channel(audit) log [INF] : from='mgr.14400 192.168.122.100:0/1063598679' entity='mgr.compute-0.wwpcae' 
Dec 11 09:16:27 compute-0 ceph-mgr[74715]: [progress INFO root] complete: finished ev ba601e07-6e92-4b80-b0e9-c2a7b2cb020f (Updating rgw.rgw deployment (+3 -> 3))
Dec 11 09:16:27 compute-0 ceph-mgr[74715]: [progress INFO root] Completed event ba601e07-6e92-4b80-b0e9-c2a7b2cb020f (Updating rgw.rgw deployment (+3 -> 3)) in 7 seconds
Dec 11 09:16:27 compute-0 ceph-mgr[74715]: [cephadm INFO cephadm.services.cephadmservice] Saving service rgw.rgw spec with placement compute-0;compute-1;compute-2
Dec 11 09:16:27 compute-0 ceph-mgr[74715]: log_channel(cephadm) log [INF] : Saving service rgw.rgw spec with placement compute-0;compute-1;compute-2
Dec 11 09:16:27 compute-0 ceph-mon[74426]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.rgw.rgw}] v 0)
Dec 11 09:16:27 compute-0 ceph-mon[74426]: log_channel(audit) log [INF] : from='mgr.14400 192.168.122.100:0/1063598679' entity='mgr.compute-0.wwpcae' 
Dec 11 09:16:27 compute-0 ceph-mon[74426]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.rgw.rgw}] v 0)
Dec 11 09:16:27 compute-0 ceph-mon[74426]: log_channel(audit) log [INF] : from='mgr.14400 192.168.122.100:0/1063598679' entity='mgr.compute-0.wwpcae' 
Dec 11 09:16:27 compute-0 ceph-mgr[74715]: [progress INFO root] update: starting ev 0511e658-5a4e-4051-b63d-31a6ac6bf717 (Updating mds.cephfs deployment (+3 -> 3))
Dec 11 09:16:27 compute-0 ceph-mon[74426]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-2.abebdg", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} v 0)
Dec 11 09:16:27 compute-0 ceph-mon[74426]: log_channel(audit) log [INF] : from='mgr.14400 192.168.122.100:0/1063598679' entity='mgr.compute-0.wwpcae' cmd=[{"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-2.abebdg", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]: dispatch
Dec 11 09:16:27 compute-0 ceph-mon[74426]: log_channel(audit) log [INF] : from='mgr.14400 192.168.122.100:0/1063598679' entity='mgr.compute-0.wwpcae' cmd='[{"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-2.abebdg", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]': finished
Dec 11 09:16:27 compute-0 ceph-mon[74426]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 11 09:16:27 compute-0 ceph-mon[74426]: log_channel(audit) log [DBG] : from='mgr.14400 192.168.122.100:0/1063598679' entity='mgr.compute-0.wwpcae' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 11 09:16:27 compute-0 ceph-mgr[74715]: [cephadm INFO cephadm.serve] Deploying daemon mds.cephfs.compute-2.abebdg on compute-2
Dec 11 09:16:27 compute-0 ceph-mgr[74715]: log_channel(cephadm) log [INF] : Deploying daemon mds.cephfs.compute-2.abebdg on compute-2
Dec 11 09:16:27 compute-0 ceph-mon[74426]: mon.compute-0@0(leader).osd e51 do_prune osdmap full prune enabled
Dec 11 09:16:27 compute-0 ceph-mon[74426]: mon.compute-0@0(leader).osd e52 e52: 3 total, 3 up, 3 in
Dec 11 09:16:27 compute-0 ceph-mon[74426]: log_channel(cluster) log [DBG] : osdmap e52: 3 total, 3 up, 3 in
Dec 11 09:16:28 compute-0 ceph-mon[74426]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"} v 0)
Dec 11 09:16:28 compute-0 ceph-mon[74426]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/4067601783' entity='client.rgw.rgw.compute-0.dblyhr' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]: dispatch
Dec 11 09:16:28 compute-0 ceph-mon[74426]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"} v 0)
Dec 11 09:16:28 compute-0 ceph-mon[74426]: log_channel(audit) log [INF] : from='client.? ' entity='client.rgw.rgw.compute-2.aenhnr' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]: dispatch
Dec 11 09:16:28 compute-0 ceph-mon[74426]: pgmap v25: 196 pgs: 2 unknown, 194 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 11 09:16:28 compute-0 ceph-mon[74426]: from='client.? ' entity='client.rgw.rgw.compute-2.aenhnr' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"}]': finished
Dec 11 09:16:28 compute-0 ceph-mon[74426]: from='client.? ' entity='client.rgw.rgw.compute-1.hnfveq' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"}]': finished
Dec 11 09:16:28 compute-0 ceph-mon[74426]: osdmap e51: 3 total, 3 up, 3 in
Dec 11 09:16:28 compute-0 ceph-mon[74426]: from='mgr.14400 192.168.122.100:0/1063598679' entity='mgr.compute-0.wwpcae' 
Dec 11 09:16:28 compute-0 ceph-mon[74426]: from='mgr.14400 192.168.122.100:0/1063598679' entity='mgr.compute-0.wwpcae' 
Dec 11 09:16:28 compute-0 ceph-mon[74426]: from='mgr.14400 192.168.122.100:0/1063598679' entity='mgr.compute-0.wwpcae' 
Dec 11 09:16:28 compute-0 ceph-mon[74426]: Saving service rgw.rgw spec with placement compute-0;compute-1;compute-2
Dec 11 09:16:28 compute-0 ceph-mon[74426]: from='mgr.14400 192.168.122.100:0/1063598679' entity='mgr.compute-0.wwpcae' 
Dec 11 09:16:28 compute-0 ceph-mon[74426]: from='mgr.14400 192.168.122.100:0/1063598679' entity='mgr.compute-0.wwpcae' 
Dec 11 09:16:28 compute-0 ceph-mon[74426]: from='mgr.14400 192.168.122.100:0/1063598679' entity='mgr.compute-0.wwpcae' cmd=[{"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-2.abebdg", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]: dispatch
Dec 11 09:16:28 compute-0 ceph-mon[74426]: from='mgr.14400 192.168.122.100:0/1063598679' entity='mgr.compute-0.wwpcae' cmd='[{"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-2.abebdg", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]': finished
Dec 11 09:16:28 compute-0 ceph-mon[74426]: from='mgr.14400 192.168.122.100:0/1063598679' entity='mgr.compute-0.wwpcae' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 11 09:16:28 compute-0 ceph-mon[74426]: Deploying daemon mds.cephfs.compute-2.abebdg on compute-2
Dec 11 09:16:28 compute-0 ceph-mon[74426]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"} v 0)
Dec 11 09:16:28 compute-0 ceph-mon[74426]: log_channel(audit) log [INF] : from='client.? ' entity='client.rgw.rgw.compute-1.hnfveq' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]: dispatch
Dec 11 09:16:28 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 52 pg[11.0( empty local-lis/les=0/0 n=0 ec=52/52 lis/c=0/0 les/c/f=0/0/0 sis=52) [1] r=0 lpr=52 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 11 09:16:28 compute-0 ceph-mgr[74715]: log_channel(cluster) log [DBG] : pgmap v28: 197 pgs: 1 unknown, 196 active+clean; 450 KiB data, 81 MiB used, 60 GiB / 60 GiB avail; 7.0 KiB/s rd, 2.0 KiB/s wr, 11 op/s
Dec 11 09:16:28 compute-0 ceph-mon[74426]: mon.compute-0@0(leader).osd e52 do_prune osdmap full prune enabled
Dec 11 09:16:29 compute-0 ceph-mon[74426]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/4067601783' entity='client.rgw.rgw.compute-0.dblyhr' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]': finished
Dec 11 09:16:29 compute-0 ceph-mon[74426]: log_channel(audit) log [INF] : from='client.? ' entity='client.rgw.rgw.compute-2.aenhnr' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]': finished
Dec 11 09:16:29 compute-0 ceph-mon[74426]: log_channel(audit) log [INF] : from='client.? ' entity='client.rgw.rgw.compute-1.hnfveq' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]': finished
Dec 11 09:16:29 compute-0 ceph-mon[74426]: mon.compute-0@0(leader).osd e53 e53: 3 total, 3 up, 3 in
Dec 11 09:16:29 compute-0 ceph-mon[74426]: log_channel(cluster) log [DBG] : osdmap e53: 3 total, 3 up, 3 in
Dec 11 09:16:29 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 53 pg[11.0( empty local-lis/les=52/53 n=0 ec=52/52 lis/c=0/0 les/c/f=0/0/0 sis=52) [1] r=0 lpr=52 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 11 09:16:29 compute-0 ceph-mon[74426]: mon.compute-0@0(leader).osd e53 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 11 09:16:29 compute-0 ceph-mon[74426]: osdmap e52: 3 total, 3 up, 3 in
Dec 11 09:16:29 compute-0 ceph-mon[74426]: from='client.? 192.168.122.102:0/487353682' entity='client.rgw.rgw.compute-2.aenhnr' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]: dispatch
Dec 11 09:16:29 compute-0 ceph-mon[74426]: from='client.? 192.168.122.100:0/4067601783' entity='client.rgw.rgw.compute-0.dblyhr' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]: dispatch
Dec 11 09:16:29 compute-0 ceph-mon[74426]: from='client.? ' entity='client.rgw.rgw.compute-2.aenhnr' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]: dispatch
Dec 11 09:16:29 compute-0 ceph-mon[74426]: from='client.? 192.168.122.101:0/3289101500' entity='client.rgw.rgw.compute-1.hnfveq' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]: dispatch
Dec 11 09:16:29 compute-0 ceph-mon[74426]: from='client.? ' entity='client.rgw.rgw.compute-1.hnfveq' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]: dispatch
Dec 11 09:16:29 compute-0 ceph-mon[74426]: from='client.? 192.168.122.100:0/4067601783' entity='client.rgw.rgw.compute-0.dblyhr' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]': finished
Dec 11 09:16:29 compute-0 ceph-mon[74426]: from='client.? ' entity='client.rgw.rgw.compute-2.aenhnr' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]': finished
Dec 11 09:16:29 compute-0 ceph-mon[74426]: from='client.? ' entity='client.rgw.rgw.compute-1.hnfveq' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]': finished
Dec 11 09:16:29 compute-0 ceph-mon[74426]: osdmap e53: 3 total, 3 up, 3 in
Dec 11 09:16:29 compute-0 ceph-mon[74426]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0)
Dec 11 09:16:29 compute-0 ceph-mon[74426]: log_channel(audit) log [INF] : from='mgr.14400 192.168.122.100:0/1063598679' entity='mgr.compute-0.wwpcae' 
Dec 11 09:16:29 compute-0 ceph-mon[74426]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Dec 11 09:16:29 compute-0 ceph-mon[74426]: log_channel(audit) log [INF] : from='mgr.14400 192.168.122.100:0/1063598679' entity='mgr.compute-0.wwpcae' 
Dec 11 09:16:29 compute-0 ceph-mon[74426]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mds.cephfs}] v 0)
Dec 11 09:16:29 compute-0 ceph-mon[74426]: log_channel(audit) log [INF] : from='mgr.14400 192.168.122.100:0/1063598679' entity='mgr.compute-0.wwpcae' 
Dec 11 09:16:29 compute-0 ceph-mon[74426]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-0.ejykhm", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} v 0)
Dec 11 09:16:29 compute-0 ceph-mon[74426]: log_channel(audit) log [INF] : from='mgr.14400 192.168.122.100:0/1063598679' entity='mgr.compute-0.wwpcae' cmd=[{"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-0.ejykhm", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]: dispatch
Dec 11 09:16:29 compute-0 ceph-mon[74426]: log_channel(audit) log [INF] : from='mgr.14400 192.168.122.100:0/1063598679' entity='mgr.compute-0.wwpcae' cmd='[{"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-0.ejykhm", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]': finished
Dec 11 09:16:29 compute-0 ceph-mon[74426]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 11 09:16:29 compute-0 ceph-mon[74426]: log_channel(audit) log [DBG] : from='mgr.14400 192.168.122.100:0/1063598679' entity='mgr.compute-0.wwpcae' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 11 09:16:29 compute-0 ceph-mgr[74715]: [cephadm INFO cephadm.serve] Deploying daemon mds.cephfs.compute-0.ejykhm on compute-0
Dec 11 09:16:29 compute-0 ceph-mgr[74715]: log_channel(cephadm) log [INF] : Deploying daemon mds.cephfs.compute-0.ejykhm on compute-0
Dec 11 09:16:29 compute-0 sudo[93950]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 11 09:16:29 compute-0 sudo[93950]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 11 09:16:29 compute-0 sudo[93950]: pam_unix(sudo:session): session closed for user root
Dec 11 09:16:29 compute-0 sudo[93975]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 _orch deploy --fsid 31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060
Dec 11 09:16:29 compute-0 sudo[93975]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 11 09:16:29 compute-0 podman[94039]: 2025-12-11 09:16:29.852871545 +0000 UTC m=+0.057986446 container create 6730b79307449587a23f2c546a0328265665689b88b8be9c820c503285828c27 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=strange_raman, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS)
Dec 11 09:16:29 compute-0 systemd[1]: Started libpod-conmon-6730b79307449587a23f2c546a0328265665689b88b8be9c820c503285828c27.scope.
Dec 11 09:16:29 compute-0 podman[94039]: 2025-12-11 09:16:29.827343156 +0000 UTC m=+0.032458077 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec 11 09:16:29 compute-0 systemd[1]: Started libcrun container.
Dec 11 09:16:29 compute-0 podman[94039]: 2025-12-11 09:16:29.953135834 +0000 UTC m=+0.158250765 container init 6730b79307449587a23f2c546a0328265665689b88b8be9c820c503285828c27 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=strange_raman, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 11 09:16:29 compute-0 podman[94039]: 2025-12-11 09:16:29.959733071 +0000 UTC m=+0.164847972 container start 6730b79307449587a23f2c546a0328265665689b88b8be9c820c503285828c27 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=strange_raman, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec 11 09:16:29 compute-0 podman[94039]: 2025-12-11 09:16:29.963396166 +0000 UTC m=+0.168511097 container attach 6730b79307449587a23f2c546a0328265665689b88b8be9c820c503285828c27 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=strange_raman, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1)
Dec 11 09:16:29 compute-0 strange_raman[94055]: 167 167
Dec 11 09:16:29 compute-0 systemd[1]: libpod-6730b79307449587a23f2c546a0328265665689b88b8be9c820c503285828c27.scope: Deactivated successfully.
Dec 11 09:16:29 compute-0 podman[94039]: 2025-12-11 09:16:29.965735099 +0000 UTC m=+0.170850000 container died 6730b79307449587a23f2c546a0328265665689b88b8be9c820c503285828c27 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=strange_raman, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Dec 11 09:16:29 compute-0 systemd[1]: var-lib-containers-storage-overlay-80898aaae299be5b4b8b2ff960d8aa6acbb27627e6cf9e3107889c40c940e194-merged.mount: Deactivated successfully.
Dec 11 09:16:30 compute-0 podman[94039]: 2025-12-11 09:16:30.003658695 +0000 UTC m=+0.208773596 container remove 6730b79307449587a23f2c546a0328265665689b88b8be9c820c503285828c27 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=strange_raman, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Dec 11 09:16:30 compute-0 ceph-mon[74426]: mon.compute-0@0(leader).osd e53 do_prune osdmap full prune enabled
Dec 11 09:16:30 compute-0 systemd[1]: libpod-conmon-6730b79307449587a23f2c546a0328265665689b88b8be9c820c503285828c27.scope: Deactivated successfully.
Dec 11 09:16:30 compute-0 ceph-mon[74426]: mon.compute-0@0(leader).osd e54 e54: 3 total, 3 up, 3 in
Dec 11 09:16:30 compute-0 ceph-mon[74426]: log_channel(cluster) log [DBG] : osdmap e54: 3 total, 3 up, 3 in
Dec 11 09:16:30 compute-0 ceph-mon[74426]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"} v 0)
Dec 11 09:16:30 compute-0 ceph-mon[74426]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/4067601783' entity='client.rgw.rgw.compute-0.dblyhr' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]: dispatch
Dec 11 09:16:30 compute-0 ceph-mon[74426]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"} v 0)
Dec 11 09:16:30 compute-0 ceph-mon[74426]: log_channel(audit) log [INF] : from='client.? ' entity='client.rgw.rgw.compute-1.hnfveq' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]: dispatch
Dec 11 09:16:30 compute-0 ceph-mon[74426]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"} v 0)
Dec 11 09:16:30 compute-0 ceph-mon[74426]: log_channel(audit) log [INF] : from='client.? ' entity='client.rgw.rgw.compute-2.aenhnr' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]: dispatch
Dec 11 09:16:30 compute-0 systemd[1]: Reloading.
Dec 11 09:16:30 compute-0 ceph-mon[74426]: mon.compute-0@0(leader).mds e3 new map
Dec 11 09:16:30 compute-0 ceph-mon[74426]: mon.compute-0@0(leader).mds e3 print_map
                                           e3
                                           btime 2025-12-11T09:16:30.087627+0000
                                           enable_multiple, ever_enabled_multiple: 1,1
                                           default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}
                                           legacy client fscid: 1
                                            
                                           Filesystem 'cephfs' (1)
                                           fs_name        cephfs
                                           epoch        2
                                           flags        12 joinable allow_snaps allow_multimds_snaps
                                           created        2025-12-11T09:15:57.915143+0000
                                           modified        2025-12-11T09:15:57.915143+0000
                                           tableserver        0
                                           root        0
                                           session_timeout        60
                                           session_autoclose        300
                                           max_file_size        1099511627776
                                           max_xattr_size        65536
                                           required_client_features        {}
                                           last_failure        0
                                           last_failure_osd_epoch        0
                                           compat        compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}
                                           max_mds        1
                                           in        
                                           up        {}
                                           failed        
                                           damaged        
                                           stopped        
                                           data_pools        [7]
                                           metadata_pool        6
                                           inline_data        disabled
                                           balancer        
                                           bal_rank_mask        -1
                                           standby_count_wanted        0
                                           qdb_cluster        leader: 0 members: 
                                            
                                            
                                           Standby daemons:
                                            
                                           [mds.cephfs.compute-2.abebdg{-1:24187} state up:standby seq 1 addr [v2:192.168.122.102:6804/1341793648,v1:192.168.122.102:6805/1341793648] compat {c=[1],r=[1],i=[1fff]}]
Dec 11 09:16:30 compute-0 ceph-mon[74426]: log_channel(cluster) log [DBG] : mds.? [v2:192.168.122.102:6804/1341793648,v1:192.168.122.102:6805/1341793648] up:boot
Dec 11 09:16:30 compute-0 ceph-mon[74426]: mon.compute-0@0(leader).mds e3 assigned standby [v2:192.168.122.102:6804/1341793648,v1:192.168.122.102:6805/1341793648] as mds.0
Dec 11 09:16:30 compute-0 ceph-mon[74426]: log_channel(cluster) log [INF] : daemon mds.cephfs.compute-2.abebdg assigned to filesystem cephfs as rank 0 (now has 1 ranks)
Dec 11 09:16:30 compute-0 ceph-mon[74426]: log_channel(cluster) log [INF] : Health check cleared: MDS_ALL_DOWN (was: 1 filesystem is offline)
Dec 11 09:16:30 compute-0 ceph-mon[74426]: log_channel(cluster) log [INF] : Health check cleared: MDS_UP_LESS_THAN_MAX (was: 1 filesystem is online with fewer MDS than max_mds)
Dec 11 09:16:30 compute-0 ceph-mon[74426]: log_channel(cluster) log [DBG] : fsmap cephfs:0 1 up:standby
Dec 11 09:16:30 compute-0 ceph-mon[74426]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mds metadata", "who": "cephfs.compute-2.abebdg"} v 0)
Dec 11 09:16:30 compute-0 ceph-mon[74426]: log_channel(audit) log [DBG] : from='mgr.14400 192.168.122.100:0/1063598679' entity='mgr.compute-0.wwpcae' cmd=[{"prefix": "mds metadata", "who": "cephfs.compute-2.abebdg"}]: dispatch
Dec 11 09:16:30 compute-0 ceph-mon[74426]: mon.compute-0@0(leader).mds e3 all = 0
Dec 11 09:16:30 compute-0 ceph-mon[74426]: mon.compute-0@0(leader).mds e4 new map
Dec 11 09:16:30 compute-0 ceph-mon[74426]: mon.compute-0@0(leader).mds e4 print_map
                                           e4
                                           btime 2025-12-11T09:16:30:098944+0000
                                           enable_multiple, ever_enabled_multiple: 1,1
                                           default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}
                                           legacy client fscid: 1
                                            
                                           Filesystem 'cephfs' (1)
                                           fs_name        cephfs
                                           epoch        4
                                           flags        12 joinable allow_snaps allow_multimds_snaps
                                           created        2025-12-11T09:15:57.915143+0000
                                           modified        2025-12-11T09:16:30.098928+0000
                                           tableserver        0
                                           root        0
                                           session_timeout        60
                                           session_autoclose        300
                                           max_file_size        1099511627776
                                           max_xattr_size        65536
                                           required_client_features        {}
                                           last_failure        0
                                           last_failure_osd_epoch        0
                                           compat        compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}
                                           max_mds        1
                                           in        0
                                           up        {0=24187}
                                           failed        
                                           damaged        
                                           stopped        
                                           data_pools        [7]
                                           metadata_pool        6
                                           inline_data        disabled
                                           balancer        
                                           bal_rank_mask        -1
                                           standby_count_wanted        0
                                           qdb_cluster        leader: 0 members: 
                                           [mds.cephfs.compute-2.abebdg{0:24187} state up:creating seq 1 addr [v2:192.168.122.102:6804/1341793648,v1:192.168.122.102:6805/1341793648] compat {c=[1],r=[1],i=[1fff]}]
                                            
                                            
Dec 11 09:16:30 compute-0 ceph-mon[74426]: log_channel(cluster) log [DBG] : fsmap cephfs:1 {0=cephfs.compute-2.abebdg=up:creating}
Dec 11 09:16:30 compute-0 systemd-sysv-generator[94099]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 11 09:16:30 compute-0 systemd-rc-local-generator[94090]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 11 09:16:30 compute-0 ceph-mon[74426]: log_channel(cluster) log [INF] : daemon mds.cephfs.compute-2.abebdg is now active in filesystem cephfs as rank 0
Dec 11 09:16:30 compute-0 ceph-mon[74426]: pgmap v28: 197 pgs: 1 unknown, 196 active+clean; 450 KiB data, 81 MiB used, 60 GiB / 60 GiB avail; 7.0 KiB/s rd, 2.0 KiB/s wr, 11 op/s
Dec 11 09:16:30 compute-0 ceph-mon[74426]: from='mgr.14400 192.168.122.100:0/1063598679' entity='mgr.compute-0.wwpcae' 
Dec 11 09:16:30 compute-0 ceph-mon[74426]: from='mgr.14400 192.168.122.100:0/1063598679' entity='mgr.compute-0.wwpcae' 
Dec 11 09:16:30 compute-0 ceph-mon[74426]: from='mgr.14400 192.168.122.100:0/1063598679' entity='mgr.compute-0.wwpcae' 
Dec 11 09:16:30 compute-0 ceph-mon[74426]: from='mgr.14400 192.168.122.100:0/1063598679' entity='mgr.compute-0.wwpcae' cmd=[{"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-0.ejykhm", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]: dispatch
Dec 11 09:16:30 compute-0 ceph-mon[74426]: from='mgr.14400 192.168.122.100:0/1063598679' entity='mgr.compute-0.wwpcae' cmd='[{"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-0.ejykhm", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]': finished
Dec 11 09:16:30 compute-0 ceph-mon[74426]: from='mgr.14400 192.168.122.100:0/1063598679' entity='mgr.compute-0.wwpcae' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 11 09:16:30 compute-0 ceph-mon[74426]: Deploying daemon mds.cephfs.compute-0.ejykhm on compute-0
Dec 11 09:16:30 compute-0 ceph-mon[74426]: osdmap e54: 3 total, 3 up, 3 in
Dec 11 09:16:30 compute-0 ceph-mon[74426]: from='client.? 192.168.122.100:0/4067601783' entity='client.rgw.rgw.compute-0.dblyhr' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]: dispatch
Dec 11 09:16:30 compute-0 ceph-mon[74426]: from='client.? 192.168.122.101:0/3289101500' entity='client.rgw.rgw.compute-1.hnfveq' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]: dispatch
Dec 11 09:16:30 compute-0 ceph-mon[74426]: from='client.? ' entity='client.rgw.rgw.compute-1.hnfveq' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]: dispatch
Dec 11 09:16:30 compute-0 ceph-mon[74426]: from='client.? 192.168.122.102:0/487353682' entity='client.rgw.rgw.compute-2.aenhnr' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]: dispatch
Dec 11 09:16:30 compute-0 ceph-mon[74426]: from='client.? ' entity='client.rgw.rgw.compute-2.aenhnr' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]: dispatch
Dec 11 09:16:30 compute-0 ceph-mon[74426]: mds.? [v2:192.168.122.102:6804/1341793648,v1:192.168.122.102:6805/1341793648] up:boot
Dec 11 09:16:30 compute-0 ceph-mon[74426]: daemon mds.cephfs.compute-2.abebdg assigned to filesystem cephfs as rank 0 (now has 1 ranks)
Dec 11 09:16:30 compute-0 ceph-mon[74426]: Health check cleared: MDS_ALL_DOWN (was: 1 filesystem is offline)
Dec 11 09:16:30 compute-0 ceph-mon[74426]: Health check cleared: MDS_UP_LESS_THAN_MAX (was: 1 filesystem is online with fewer MDS than max_mds)
Dec 11 09:16:30 compute-0 ceph-mon[74426]: fsmap cephfs:0 1 up:standby
Dec 11 09:16:30 compute-0 ceph-mon[74426]: from='mgr.14400 192.168.122.100:0/1063598679' entity='mgr.compute-0.wwpcae' cmd=[{"prefix": "mds metadata", "who": "cephfs.compute-2.abebdg"}]: dispatch
Dec 11 09:16:30 compute-0 ceph-mon[74426]: fsmap cephfs:1 {0=cephfs.compute-2.abebdg=up:creating}
Dec 11 09:16:30 compute-0 ceph-mon[74426]: daemon mds.cephfs.compute-2.abebdg is now active in filesystem cephfs as rank 0
Dec 11 09:16:30 compute-0 systemd[1]: Reloading.
Dec 11 09:16:30 compute-0 systemd-rc-local-generator[94137]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 11 09:16:30 compute-0 systemd-sysv-generator[94142]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 11 09:16:30 compute-0 systemd[1]: Starting Ceph mds.cephfs.compute-0.ejykhm for 31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060...
Dec 11 09:16:30 compute-0 ceph-mgr[74715]: log_channel(cluster) log [DBG] : pgmap v31: 198 pgs: 2 unknown, 196 active+clean; 450 KiB data, 81 MiB used, 60 GiB / 60 GiB avail; 7.0 KiB/s rd, 2.0 KiB/s wr, 11 op/s
Dec 11 09:16:30 compute-0 podman[94197]: 2025-12-11 09:16:30.92282993 +0000 UTC m=+0.044189144 container create 4ac55a0a45869656477ce43ac813f5b25a13b5c736d50b228a1d873d1c14738a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-mds-cephfs-compute-0-ejykhm, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec 11 09:16:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5a9e308231ad6e525066a035e6dff1782feb986ef875960f550b0dc124bc6762/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 11 09:16:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5a9e308231ad6e525066a035e6dff1782feb986ef875960f550b0dc124bc6762/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 11 09:16:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5a9e308231ad6e525066a035e6dff1782feb986ef875960f550b0dc124bc6762/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 11 09:16:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5a9e308231ad6e525066a035e6dff1782feb986ef875960f550b0dc124bc6762/merged/var/lib/ceph/mds/ceph-cephfs.compute-0.ejykhm supports timestamps until 2038 (0x7fffffff)
Dec 11 09:16:30 compute-0 podman[94197]: 2025-12-11 09:16:30.993646707 +0000 UTC m=+0.115005941 container init 4ac55a0a45869656477ce43ac813f5b25a13b5c736d50b228a1d873d1c14738a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-mds-cephfs-compute-0-ejykhm, CEPH_REF=squid, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 11 09:16:31 compute-0 podman[94197]: 2025-12-11 09:16:30.902928317 +0000 UTC m=+0.024287551 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec 11 09:16:31 compute-0 podman[94197]: 2025-12-11 09:16:31.000385017 +0000 UTC m=+0.121744241 container start 4ac55a0a45869656477ce43ac813f5b25a13b5c736d50b228a1d873d1c14738a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-mds-cephfs-compute-0-ejykhm, org.label-schema.vendor=CentOS, CEPH_REF=squid, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.build-date=20250325)
Dec 11 09:16:31 compute-0 bash[94197]: 4ac55a0a45869656477ce43ac813f5b25a13b5c736d50b228a1d873d1c14738a
Dec 11 09:16:31 compute-0 systemd[1]: Started Ceph mds.cephfs.compute-0.ejykhm for 31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060.
Dec 11 09:16:31 compute-0 ceph-mon[74426]: mon.compute-0@0(leader).osd e54 do_prune osdmap full prune enabled
Dec 11 09:16:31 compute-0 ceph-mon[74426]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/4067601783' entity='client.rgw.rgw.compute-0.dblyhr' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]': finished
Dec 11 09:16:31 compute-0 ceph-mon[74426]: log_channel(audit) log [INF] : from='client.? ' entity='client.rgw.rgw.compute-1.hnfveq' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]': finished
Dec 11 09:16:31 compute-0 ceph-mon[74426]: log_channel(audit) log [INF] : from='client.? ' entity='client.rgw.rgw.compute-2.aenhnr' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]': finished
Dec 11 09:16:31 compute-0 ceph-mon[74426]: mon.compute-0@0(leader).osd e55 e55: 3 total, 3 up, 3 in
Dec 11 09:16:31 compute-0 ceph-mon[74426]: log_channel(cluster) log [DBG] : osdmap e55: 3 total, 3 up, 3 in
Dec 11 09:16:31 compute-0 ceph-mon[74426]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"} v 0)
Dec 11 09:16:31 compute-0 ceph-mon[74426]: log_channel(audit) log [INF] : from='client.? ' entity='client.rgw.rgw.compute-2.aenhnr' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]: dispatch
Dec 11 09:16:31 compute-0 ceph-mon[74426]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"} v 0)
Dec 11 09:16:31 compute-0 ceph-mon[74426]: log_channel(audit) log [INF] : from='client.? ' entity='client.rgw.rgw.compute-1.hnfveq' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]: dispatch
Dec 11 09:16:31 compute-0 ceph-mon[74426]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"} v 0)
Dec 11 09:16:31 compute-0 ceph-mon[74426]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/4067601783' entity='client.rgw.rgw.compute-0.dblyhr' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]: dispatch
Dec 11 09:16:31 compute-0 ceph-mds[94216]: set uid:gid to 167:167 (ceph:ceph)
Dec 11 09:16:31 compute-0 ceph-mds[94216]: ceph version 19.2.3 (c92aebb279828e9c3c1f5d24613efca272649e62) squid (stable), process ceph-mds, pid 2
Dec 11 09:16:31 compute-0 ceph-mds[94216]: main not setting numa affinity
Dec 11 09:16:31 compute-0 ceph-mds[94216]: pidfile_write: ignore empty --pid-file
Dec 11 09:16:31 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-mds-cephfs-compute-0-ejykhm[94212]: starting mds.cephfs.compute-0.ejykhm at 
Dec 11 09:16:31 compute-0 ceph-mds[94216]: mds.cephfs.compute-0.ejykhm Updating MDS map to version 4 from mon.0
Dec 11 09:16:31 compute-0 sudo[93975]: pam_unix(sudo:session): session closed for user root
Dec 11 09:16:31 compute-0 ceph-mon[74426]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec 11 09:16:31 compute-0 ceph-mon[74426]: log_channel(audit) log [INF] : from='mgr.14400 192.168.122.100:0/1063598679' entity='mgr.compute-0.wwpcae' 
Dec 11 09:16:31 compute-0 ceph-mon[74426]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec 11 09:16:31 compute-0 ceph-mon[74426]: log_channel(audit) log [INF] : from='mgr.14400 192.168.122.100:0/1063598679' entity='mgr.compute-0.wwpcae' 
Dec 11 09:16:31 compute-0 ceph-mon[74426]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mds.cephfs}] v 0)
Dec 11 09:16:31 compute-0 ceph-mon[74426]: mon.compute-0@0(leader).mds e5 new map
Dec 11 09:16:31 compute-0 ceph-mon[74426]: mon.compute-0@0(leader).mds e5 print_map
                                           e5
                                           btime 2025-12-11T09:16:31:110917+0000
                                           enable_multiple, ever_enabled_multiple: 1,1
                                           default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}
                                           legacy client fscid: 1
                                            
                                           Filesystem 'cephfs' (1)
                                           fs_name        cephfs
                                           epoch        5
                                           flags        12 joinable allow_snaps allow_multimds_snaps
                                           created        2025-12-11T09:15:57.915143+0000
                                           modified        2025-12-11T09:16:31.110914+0000
                                           tableserver        0
                                           root        0
                                           session_timeout        60
                                           session_autoclose        300
                                           max_file_size        1099511627776
                                           max_xattr_size        65536
                                           required_client_features        {}
                                           last_failure        0
                                           last_failure_osd_epoch        0
                                           compat        compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}
                                           max_mds        1
                                           in        0
                                           up        {0=24187}
                                           failed        
                                           damaged        
                                           stopped        
                                           data_pools        [7]
                                           metadata_pool        6
                                           inline_data        disabled
                                           balancer        
                                           bal_rank_mask        -1
                                           standby_count_wanted        0
                                           qdb_cluster        leader: 24187 members: 24187
                                           [mds.cephfs.compute-2.abebdg{0:24187} state up:active seq 2 addr [v2:192.168.122.102:6804/1341793648,v1:192.168.122.102:6805/1341793648] compat {c=[1],r=[1],i=[1fff]}]
                                            
                                            
                                           Standby daemons:
                                            
                                           [mds.cephfs.compute-0.ejykhm{-1:14481} state up:standby seq 1 addr [v2:192.168.122.100:6806/2227004311,v1:192.168.122.100:6807/2227004311] compat {c=[1],r=[1],i=[1fff]}]
Dec 11 09:16:31 compute-0 ceph-mds[94216]: mds.cephfs.compute-0.ejykhm Updating MDS map to version 5 from mon.0
Dec 11 09:16:31 compute-0 ceph-mds[94216]: mds.cephfs.compute-0.ejykhm Monitors have assigned me to become a standby
Dec 11 09:16:31 compute-0 ceph-mon[74426]: log_channel(cluster) log [DBG] : mds.? [v2:192.168.122.102:6804/1341793648,v1:192.168.122.102:6805/1341793648] up:active
Dec 11 09:16:31 compute-0 ceph-mon[74426]: log_channel(cluster) log [DBG] : mds.? [v2:192.168.122.100:6806/2227004311,v1:192.168.122.100:6807/2227004311] up:boot
Dec 11 09:16:31 compute-0 ceph-mon[74426]: log_channel(cluster) log [DBG] : fsmap cephfs:1 {0=cephfs.compute-2.abebdg=up:active} 1 up:standby
Dec 11 09:16:31 compute-0 ceph-mon[74426]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mds metadata", "who": "cephfs.compute-0.ejykhm"} v 0)
Dec 11 09:16:31 compute-0 ceph-mon[74426]: log_channel(audit) log [DBG] : from='mgr.14400 192.168.122.100:0/1063598679' entity='mgr.compute-0.wwpcae' cmd=[{"prefix": "mds metadata", "who": "cephfs.compute-0.ejykhm"}]: dispatch
Dec 11 09:16:31 compute-0 ceph-mon[74426]: mon.compute-0@0(leader).mds e5 all = 0
Dec 11 09:16:31 compute-0 ceph-mon[74426]: mon.compute-0@0(leader).mds e6 new map
Dec 11 09:16:31 compute-0 ceph-mon[74426]: mon.compute-0@0(leader).mds e6 print_map
                                           e6
                                           btime 2025-12-11T09:16:31:130086+0000
                                           enable_multiple, ever_enabled_multiple: 1,1
                                           default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}
                                           legacy client fscid: 1
                                            
                                           Filesystem 'cephfs' (1)
                                           fs_name        cephfs
                                           epoch        5
                                           flags        12 joinable allow_snaps allow_multimds_snaps
                                           created        2025-12-11T09:15:57.915143+0000
                                           modified        2025-12-11T09:16:31.110914+0000
                                           tableserver        0
                                           root        0
                                           session_timeout        60
                                           session_autoclose        300
                                           max_file_size        1099511627776
                                           max_xattr_size        65536
                                           required_client_features        {}
                                           last_failure        0
                                           last_failure_osd_epoch        0
                                           compat        compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}
                                           max_mds        1
                                           in        0
                                           up        {0=24187}
                                           failed        
                                           damaged        
                                           stopped        
                                           data_pools        [7]
                                           metadata_pool        6
                                           inline_data        disabled
                                           balancer        
                                           bal_rank_mask        -1
                                           standby_count_wanted        1
                                           qdb_cluster        leader: 24187 members: 24187
                                           [mds.cephfs.compute-2.abebdg{0:24187} state up:active seq 2 addr [v2:192.168.122.102:6804/1341793648,v1:192.168.122.102:6805/1341793648] compat {c=[1],r=[1],i=[1fff]}]
                                            
                                            
                                           Standby daemons:
                                            
                                           [mds.cephfs.compute-0.ejykhm{-1:14481} state up:standby seq 1 addr [v2:192.168.122.100:6806/2227004311,v1:192.168.122.100:6807/2227004311] compat {c=[1],r=[1],i=[1fff]}]
Dec 11 09:16:31 compute-0 ceph-mon[74426]: log_channel(audit) log [INF] : from='mgr.14400 192.168.122.100:0/1063598679' entity='mgr.compute-0.wwpcae' 
Dec 11 09:16:31 compute-0 ceph-mon[74426]: log_channel(cluster) log [DBG] : fsmap cephfs:1 {0=cephfs.compute-2.abebdg=up:active} 1 up:standby
Dec 11 09:16:31 compute-0 ceph-mon[74426]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-1.hifxsh", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} v 0)
Dec 11 09:16:31 compute-0 ceph-mon[74426]: log_channel(audit) log [INF] : from='mgr.14400 192.168.122.100:0/1063598679' entity='mgr.compute-0.wwpcae' cmd=[{"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-1.hifxsh", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]: dispatch
Dec 11 09:16:31 compute-0 ceph-mon[74426]: log_channel(audit) log [INF] : from='mgr.14400 192.168.122.100:0/1063598679' entity='mgr.compute-0.wwpcae' cmd='[{"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-1.hifxsh", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]': finished
Dec 11 09:16:31 compute-0 ceph-mon[74426]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 11 09:16:31 compute-0 ceph-mon[74426]: log_channel(audit) log [DBG] : from='mgr.14400 192.168.122.100:0/1063598679' entity='mgr.compute-0.wwpcae' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 11 09:16:31 compute-0 ceph-mgr[74715]: [cephadm INFO cephadm.serve] Deploying daemon mds.cephfs.compute-1.hifxsh on compute-1
Dec 11 09:16:31 compute-0 ceph-mgr[74715]: log_channel(cephadm) log [INF] : Deploying daemon mds.cephfs.compute-1.hifxsh on compute-1
Dec 11 09:16:32 compute-0 ceph-mgr[74715]: [progress INFO root] Writing back 15 completed events
Dec 11 09:16:32 compute-0 ceph-mon[74426]: mon.compute-0@0(leader).osd e55 do_prune osdmap full prune enabled
Dec 11 09:16:32 compute-0 ceph-mon[74426]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0)
Dec 11 09:16:32 compute-0 ceph-mon[74426]: pgmap v31: 198 pgs: 2 unknown, 196 active+clean; 450 KiB data, 81 MiB used, 60 GiB / 60 GiB avail; 7.0 KiB/s rd, 2.0 KiB/s wr, 11 op/s
Dec 11 09:16:32 compute-0 ceph-mon[74426]: from='client.? 192.168.122.100:0/4067601783' entity='client.rgw.rgw.compute-0.dblyhr' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]': finished
Dec 11 09:16:32 compute-0 ceph-mon[74426]: from='client.? ' entity='client.rgw.rgw.compute-1.hnfveq' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]': finished
Dec 11 09:16:32 compute-0 ceph-mon[74426]: from='client.? ' entity='client.rgw.rgw.compute-2.aenhnr' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]': finished
Dec 11 09:16:32 compute-0 ceph-mon[74426]: osdmap e55: 3 total, 3 up, 3 in
Dec 11 09:16:32 compute-0 ceph-mon[74426]: from='client.? 192.168.122.102:0/487353682' entity='client.rgw.rgw.compute-2.aenhnr' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]: dispatch
Dec 11 09:16:32 compute-0 ceph-mon[74426]: from='client.? ' entity='client.rgw.rgw.compute-2.aenhnr' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]: dispatch
Dec 11 09:16:32 compute-0 ceph-mon[74426]: from='client.? 192.168.122.101:0/3289101500' entity='client.rgw.rgw.compute-1.hnfveq' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]: dispatch
Dec 11 09:16:32 compute-0 ceph-mon[74426]: from='client.? ' entity='client.rgw.rgw.compute-1.hnfveq' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]: dispatch
Dec 11 09:16:32 compute-0 ceph-mon[74426]: from='client.? 192.168.122.100:0/4067601783' entity='client.rgw.rgw.compute-0.dblyhr' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]: dispatch
Dec 11 09:16:32 compute-0 ceph-mon[74426]: from='mgr.14400 192.168.122.100:0/1063598679' entity='mgr.compute-0.wwpcae' 
Dec 11 09:16:32 compute-0 ceph-mon[74426]: from='mgr.14400 192.168.122.100:0/1063598679' entity='mgr.compute-0.wwpcae' 
Dec 11 09:16:32 compute-0 ceph-mon[74426]: mds.? [v2:192.168.122.102:6804/1341793648,v1:192.168.122.102:6805/1341793648] up:active
Dec 11 09:16:32 compute-0 ceph-mon[74426]: mds.? [v2:192.168.122.100:6806/2227004311,v1:192.168.122.100:6807/2227004311] up:boot
Dec 11 09:16:32 compute-0 ceph-mon[74426]: fsmap cephfs:1 {0=cephfs.compute-2.abebdg=up:active} 1 up:standby
Dec 11 09:16:32 compute-0 ceph-mon[74426]: from='mgr.14400 192.168.122.100:0/1063598679' entity='mgr.compute-0.wwpcae' cmd=[{"prefix": "mds metadata", "who": "cephfs.compute-0.ejykhm"}]: dispatch
Dec 11 09:16:32 compute-0 ceph-mon[74426]: from='mgr.14400 192.168.122.100:0/1063598679' entity='mgr.compute-0.wwpcae' 
Dec 11 09:16:32 compute-0 ceph-mon[74426]: fsmap cephfs:1 {0=cephfs.compute-2.abebdg=up:active} 1 up:standby
Dec 11 09:16:32 compute-0 ceph-mon[74426]: from='mgr.14400 192.168.122.100:0/1063598679' entity='mgr.compute-0.wwpcae' cmd=[{"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-1.hifxsh", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]: dispatch
Dec 11 09:16:32 compute-0 ceph-mon[74426]: from='mgr.14400 192.168.122.100:0/1063598679' entity='mgr.compute-0.wwpcae' cmd='[{"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-1.hifxsh", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]': finished
Dec 11 09:16:32 compute-0 ceph-mon[74426]: from='mgr.14400 192.168.122.100:0/1063598679' entity='mgr.compute-0.wwpcae' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 11 09:16:32 compute-0 ceph-mon[74426]: Deploying daemon mds.cephfs.compute-1.hifxsh on compute-1
Dec 11 09:16:32 compute-0 ceph-mon[74426]: log_channel(audit) log [INF] : from='client.? ' entity='client.rgw.rgw.compute-2.aenhnr' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]': finished
Dec 11 09:16:32 compute-0 ceph-mon[74426]: log_channel(audit) log [INF] : from='client.? ' entity='client.rgw.rgw.compute-1.hnfveq' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]': finished
Dec 11 09:16:32 compute-0 ceph-mon[74426]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/4067601783' entity='client.rgw.rgw.compute-0.dblyhr' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]': finished
Dec 11 09:16:32 compute-0 ceph-mon[74426]: mon.compute-0@0(leader).osd e56 e56: 3 total, 3 up, 3 in
Dec 11 09:16:32 compute-0 ceph-mon[74426]: log_channel(audit) log [INF] : from='mgr.14400 192.168.122.100:0/1063598679' entity='mgr.compute-0.wwpcae' 
Dec 11 09:16:32 compute-0 ceph-mon[74426]: log_channel(cluster) log [DBG] : osdmap e56: 3 total, 3 up, 3 in
Dec 11 09:16:32 compute-0 radosgw[93354]: v1 topic migration: starting v1 topic migration..
Dec 11 09:16:32 compute-0 radosgw[93354]: LDAP not started since no server URIs were provided in the configuration.
Dec 11 09:16:32 compute-0 radosgw[93354]: v1 topic migration: finished v1 topic migration
Dec 11 09:16:32 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-rgw-rgw-compute-0-dblyhr[93350]: 2025-12-11T09:16:32.538+0000 7f37fe570980 -1 LDAP not started since no server URIs were provided in the configuration.
Dec 11 09:16:32 compute-0 radosgw[93354]: INFO: RGWReshardLock::lock found lock on reshard.0000000000 to be held by another RGW process; skipping for now
Dec 11 09:16:32 compute-0 radosgw[93354]: INFO: RGWReshardLock::lock found lock on reshard.0000000001 to be held by another RGW process; skipping for now
Dec 11 09:16:32 compute-0 radosgw[93354]: INFO: RGWReshardLock::lock found lock on reshard.0000000002 to be held by another RGW process; skipping for now
Dec 11 09:16:32 compute-0 radosgw[93354]: framework: beast
Dec 11 09:16:32 compute-0 radosgw[93354]: framework conf key: ssl_certificate, val: config://rgw/cert/$realm/$zone.crt
Dec 11 09:16:32 compute-0 radosgw[93354]: framework conf key: ssl_private_key, val: config://rgw/cert/$realm/$zone.key
Dec 11 09:16:32 compute-0 radosgw[93354]: INFO: RGWReshardLock::lock found lock on reshard.0000000003 to be held by another RGW process; skipping for now
Dec 11 09:16:32 compute-0 radosgw[93354]: starting handler: beast
Dec 11 09:16:32 compute-0 radosgw[93354]: set uid:gid to 167:167 (ceph:ceph)
Dec 11 09:16:32 compute-0 radosgw[93354]: mgrc service_daemon_register rgw.14475 metadata {arch=x86_64,ceph_release=squid,ceph_version=ceph version 19.2.3 (c92aebb279828e9c3c1f5d24613efca272649e62) squid (stable),ceph_version_short=19.2.3,container_hostname=compute-0,container_image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec,cpu=AMD EPYC-Rome Processor,distro=centos,distro_description=CentOS Stream 9,distro_version=9,frontend_config#0=beast endpoint=192.168.122.100:8082,frontend_type#0=beast,hostname=compute-0,id=rgw.compute-0.dblyhr,kernel_description=#1 SMP PREEMPT_DYNAMIC Fri Dec 5 11:18:23 UTC 2025,kernel_version=5.14.0-648.el9.x86_64,mem_swap_kb=1048572,mem_total_kb=7864300,num_handles=1,os=Linux,pid=2,realm_id=,realm_name=,zone_id=3b269e18-0f77-4f14-9aee-ab88040a4f16,zone_name=default,zonegroup_id=dab2b214-8f75-4660-ba0c-2a653c230bd3,zonegroup_name=default}
Dec 11 09:16:32 compute-0 radosgw[93354]: INFO: RGWReshardLock::lock found lock on reshard.0000000005 to be held by another RGW process; skipping for now
Dec 11 09:16:32 compute-0 radosgw[93354]: INFO: RGWReshardLock::lock found lock on reshard.0000000008 to be held by another RGW process; skipping for now
Dec 11 09:16:32 compute-0 radosgw[93354]: INFO: RGWReshardLock::lock found lock on reshard.0000000010 to be held by another RGW process; skipping for now
Dec 11 09:16:32 compute-0 ceph-mgr[74715]: log_channel(cluster) log [DBG] : pgmap v34: 198 pgs: 2 unknown, 196 active+clean; 450 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 11 09:16:32 compute-0 radosgw[93354]: INFO: RGWReshardLock::lock found lock on reshard.0000000012 to be held by another RGW process; skipping for now
Dec 11 09:16:32 compute-0 radosgw[93354]: INFO: RGWReshardLock::lock found lock on reshard.0000000013 to be held by another RGW process; skipping for now
Dec 11 09:16:32 compute-0 radosgw[93354]: INFO: RGWReshardLock::lock found lock on reshard.0000000015 to be held by another RGW process; skipping for now
Dec 11 09:16:33 compute-0 ceph-mon[74426]: log_channel(cluster) log [INF] : Health check cleared: POOL_APP_NOT_ENABLED (was: 1 pool(s) do not have an application enabled)
Dec 11 09:16:33 compute-0 ceph-mon[74426]: log_channel(cluster) log [INF] : Cluster is now healthy
Dec 11 09:16:33 compute-0 ceph-mon[74426]: from='client.? ' entity='client.rgw.rgw.compute-2.aenhnr' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]': finished
Dec 11 09:16:33 compute-0 ceph-mon[74426]: from='client.? ' entity='client.rgw.rgw.compute-1.hnfveq' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]': finished
Dec 11 09:16:33 compute-0 ceph-mon[74426]: from='client.? 192.168.122.100:0/4067601783' entity='client.rgw.rgw.compute-0.dblyhr' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]': finished
Dec 11 09:16:33 compute-0 ceph-mon[74426]: from='mgr.14400 192.168.122.100:0/1063598679' entity='mgr.compute-0.wwpcae' 
Dec 11 09:16:33 compute-0 ceph-mon[74426]: osdmap e56: 3 total, 3 up, 3 in
Dec 11 09:16:33 compute-0 ceph-mon[74426]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Dec 11 09:16:33 compute-0 ceph-mon[74426]: log_channel(audit) log [INF] : from='mgr.14400 192.168.122.100:0/1063598679' entity='mgr.compute-0.wwpcae' 
Dec 11 09:16:33 compute-0 ceph-mon[74426]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Dec 11 09:16:33 compute-0 ceph-mon[74426]: log_channel(audit) log [INF] : from='mgr.14400 192.168.122.100:0/1063598679' entity='mgr.compute-0.wwpcae' 
Dec 11 09:16:33 compute-0 ceph-mon[74426]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mds.cephfs}] v 0)
Dec 11 09:16:33 compute-0 ceph-mon[74426]: log_channel(audit) log [INF] : from='mgr.14400 192.168.122.100:0/1063598679' entity='mgr.compute-0.wwpcae' 
Dec 11 09:16:33 compute-0 ceph-mgr[74715]: [progress INFO root] complete: finished ev 0511e658-5a4e-4051-b63d-31a6ac6bf717 (Updating mds.cephfs deployment (+3 -> 3))
Dec 11 09:16:33 compute-0 ceph-mon[74426]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config set, name=mds_join_fs}] v 0)
Dec 11 09:16:33 compute-0 ceph-mgr[74715]: [progress INFO root] Completed event 0511e658-5a4e-4051-b63d-31a6ac6bf717 (Updating mds.cephfs deployment (+3 -> 3)) in 6 seconds
Dec 11 09:16:33 compute-0 ceph-mon[74426]: log_channel(audit) log [INF] : from='mgr.14400 192.168.122.100:0/1063598679' entity='mgr.compute-0.wwpcae' 
Dec 11 09:16:33 compute-0 ceph-mon[74426]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mds.cephfs}] v 0)
Dec 11 09:16:33 compute-0 ceph-mon[74426]: log_channel(audit) log [INF] : from='mgr.14400 192.168.122.100:0/1063598679' entity='mgr.compute-0.wwpcae' 
Dec 11 09:16:33 compute-0 ceph-mgr[74715]: [progress INFO root] update: starting ev 312a98b4-d515-4188-b295-c2554b8b2982 (Updating nfs.cephfs deployment (+3 -> 3))
Dec 11 09:16:33 compute-0 ceph-mon[74426]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Dec 11 09:16:33 compute-0 ceph-mon[74426]: log_channel(audit) log [INF] : from='mgr.14400 192.168.122.100:0/1063598679' entity='mgr.compute-0.wwpcae' 
Dec 11 09:16:33 compute-0 ceph-mgr[74715]: [cephadm INFO cephadm.services.nfs] Creating key for client.nfs.cephfs.0.0.compute-1.vlrwzy
Dec 11 09:16:33 compute-0 ceph-mgr[74715]: log_channel(cephadm) log [INF] : Creating key for client.nfs.cephfs.0.0.compute-1.vlrwzy
Dec 11 09:16:33 compute-0 ceph-mon[74426]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.0.0.compute-1.vlrwzy", "caps": ["mon", "allow r", "osd", "allow rw pool=.nfs namespace=cephfs"]} v 0)
Dec 11 09:16:33 compute-0 ceph-mon[74426]: log_channel(audit) log [INF] : from='mgr.14400 192.168.122.100:0/1063598679' entity='mgr.compute-0.wwpcae' cmd=[{"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.0.0.compute-1.vlrwzy", "caps": ["mon", "allow r", "osd", "allow rw pool=.nfs namespace=cephfs"]}]: dispatch
Dec 11 09:16:33 compute-0 ceph-mon[74426]: log_channel(audit) log [INF] : from='mgr.14400 192.168.122.100:0/1063598679' entity='mgr.compute-0.wwpcae' cmd='[{"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.0.0.compute-1.vlrwzy", "caps": ["mon", "allow r", "osd", "allow rw pool=.nfs namespace=cephfs"]}]': finished
Dec 11 09:16:33 compute-0 ceph-mgr[74715]: [cephadm INFO root] Ensuring nfs.cephfs.0 is in the ganesha grace table
Dec 11 09:16:33 compute-0 ceph-mgr[74715]: log_channel(cephadm) log [INF] : Ensuring nfs.cephfs.0 is in the ganesha grace table
Dec 11 09:16:33 compute-0 ceph-mon[74426]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.mgr.nfs.grace.nfs.cephfs", "caps": ["mon", "allow r", "osd", "allow rwx pool .nfs"]} v 0)
Dec 11 09:16:33 compute-0 ceph-mon[74426]: log_channel(audit) log [INF] : from='mgr.14400 192.168.122.100:0/1063598679' entity='mgr.compute-0.wwpcae' cmd=[{"prefix": "auth get-or-create", "entity": "client.mgr.nfs.grace.nfs.cephfs", "caps": ["mon", "allow r", "osd", "allow rwx pool .nfs"]}]: dispatch
Dec 11 09:16:33 compute-0 ceph-mon[74426]: log_channel(audit) log [INF] : from='mgr.14400 192.168.122.100:0/1063598679' entity='mgr.compute-0.wwpcae' cmd='[{"prefix": "auth get-or-create", "entity": "client.mgr.nfs.grace.nfs.cephfs", "caps": ["mon", "allow r", "osd", "allow rwx pool .nfs"]}]': finished
Dec 11 09:16:33 compute-0 ceph-mon[74426]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 11 09:16:33 compute-0 ceph-mon[74426]: log_channel(audit) log [DBG] : from='mgr.14400 192.168.122.100:0/1063598679' entity='mgr.compute-0.wwpcae' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 11 09:16:33 compute-0 ceph-mon[74426]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth rm", "entity": "client.mgr.nfs.grace.nfs.cephfs"} v 0)
Dec 11 09:16:33 compute-0 ceph-mon[74426]: log_channel(audit) log [INF] : from='mgr.14400 192.168.122.100:0/1063598679' entity='mgr.compute-0.wwpcae' cmd=[{"prefix": "auth rm", "entity": "client.mgr.nfs.grace.nfs.cephfs"}]: dispatch
Dec 11 09:16:33 compute-0 ceph-mon[74426]: log_channel(audit) log [INF] : from='mgr.14400 192.168.122.100:0/1063598679' entity='mgr.compute-0.wwpcae' cmd='[{"prefix": "auth rm", "entity": "client.mgr.nfs.grace.nfs.cephfs"}]': finished
Dec 11 09:16:33 compute-0 ceph-mgr[74715]: [cephadm INFO cephadm.services.nfs] Rados config object exists: conf-nfs.cephfs
Dec 11 09:16:33 compute-0 ceph-mgr[74715]: log_channel(cephadm) log [INF] : Rados config object exists: conf-nfs.cephfs
Dec 11 09:16:33 compute-0 ceph-mgr[74715]: [cephadm INFO cephadm.services.nfs] Creating key for client.nfs.cephfs.0.0.compute-1.vlrwzy-rgw
Dec 11 09:16:33 compute-0 ceph-mgr[74715]: log_channel(cephadm) log [INF] : Creating key for client.nfs.cephfs.0.0.compute-1.vlrwzy-rgw
Dec 11 09:16:33 compute-0 ceph-mon[74426]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.0.0.compute-1.vlrwzy-rgw", "caps": ["mon", "allow r", "osd", "allow rwx tag rgw *=*"]} v 0)
Dec 11 09:16:33 compute-0 ceph-mon[74426]: log_channel(audit) log [INF] : from='mgr.14400 192.168.122.100:0/1063598679' entity='mgr.compute-0.wwpcae' cmd=[{"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.0.0.compute-1.vlrwzy-rgw", "caps": ["mon", "allow r", "osd", "allow rwx tag rgw *=*"]}]: dispatch
Dec 11 09:16:33 compute-0 ceph-mon[74426]: log_channel(audit) log [INF] : from='mgr.14400 192.168.122.100:0/1063598679' entity='mgr.compute-0.wwpcae' cmd='[{"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.0.0.compute-1.vlrwzy-rgw", "caps": ["mon", "allow r", "osd", "allow rwx tag rgw *=*"]}]': finished
Dec 11 09:16:33 compute-0 ceph-mgr[74715]: [cephadm WARNING cephadm.services.nfs] Bind address in nfs.cephfs.0.0.compute-1.vlrwzy's ganesha conf is defaulting to empty
Dec 11 09:16:33 compute-0 ceph-mgr[74715]: log_channel(cephadm) log [WRN] : Bind address in nfs.cephfs.0.0.compute-1.vlrwzy's ganesha conf is defaulting to empty
Dec 11 09:16:33 compute-0 ceph-mon[74426]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 11 09:16:33 compute-0 ceph-mon[74426]: log_channel(audit) log [DBG] : from='mgr.14400 192.168.122.100:0/1063598679' entity='mgr.compute-0.wwpcae' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 11 09:16:33 compute-0 ceph-mgr[74715]: [cephadm INFO cephadm.serve] Deploying daemon nfs.cephfs.0.0.compute-1.vlrwzy on compute-1
Dec 11 09:16:33 compute-0 ceph-mgr[74715]: log_channel(cephadm) log [INF] : Deploying daemon nfs.cephfs.0.0.compute-1.vlrwzy on compute-1
Dec 11 09:16:33 compute-0 sudo[94329]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zefrurphnfovticgssprejdqivrrsxsj ; /usr/bin/python3'
Dec 11 09:16:33 compute-0 sudo[94329]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 11 09:16:33 compute-0 python3[94331]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint radosgw-admin quay.io/ceph/ceph:v19 --fsid 31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch ps --daemon-type rgw --format json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 11 09:16:33 compute-0 podman[94332]: 2025-12-11 09:16:33.986417063 +0000 UTC m=+0.052247507 container create 48ca0781ff95e4949122e6eed78e8e5080fcdf27881e1d6b20d6a6a12e42d414 (image=quay.io/ceph/ceph:v19, name=beautiful_mestorf, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec 11 09:16:34 compute-0 ceph-mon[74426]: mon.compute-0@0(leader).osd e56 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 11 09:16:34 compute-0 systemd[1]: Started libpod-conmon-48ca0781ff95e4949122e6eed78e8e5080fcdf27881e1d6b20d6a6a12e42d414.scope.
Dec 11 09:16:34 compute-0 systemd[1]: Started libcrun container.
Dec 11 09:16:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a785c8e274ccfd2f63ede9de3be25d647cfebf345c85dfe83a38eddc1e4a575f/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 11 09:16:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a785c8e274ccfd2f63ede9de3be25d647cfebf345c85dfe83a38eddc1e4a575f/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec 11 09:16:34 compute-0 podman[94332]: 2025-12-11 09:16:33.966126408 +0000 UTC m=+0.031956862 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Dec 11 09:16:34 compute-0 podman[94332]: 2025-12-11 09:16:34.074285723 +0000 UTC m=+0.140116187 container init 48ca0781ff95e4949122e6eed78e8e5080fcdf27881e1d6b20d6a6a12e42d414 (image=quay.io/ceph/ceph:v19, name=beautiful_mestorf, io.buildah.version=1.40.1, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 11 09:16:34 compute-0 podman[94332]: 2025-12-11 09:16:34.0815124 +0000 UTC m=+0.147342834 container start 48ca0781ff95e4949122e6eed78e8e5080fcdf27881e1d6b20d6a6a12e42d414 (image=quay.io/ceph/ceph:v19, name=beautiful_mestorf, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec 11 09:16:34 compute-0 podman[94332]: 2025-12-11 09:16:34.086533087 +0000 UTC m=+0.152363521 container attach 48ca0781ff95e4949122e6eed78e8e5080fcdf27881e1d6b20d6a6a12e42d414 (image=quay.io/ceph/ceph:v19, name=beautiful_mestorf, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.build-date=20250325, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 11 09:16:34 compute-0 beautiful_mestorf[94347]: ERROR: invalid flag --daemon-type
Dec 11 09:16:34 compute-0 systemd[1]: libpod-48ca0781ff95e4949122e6eed78e8e5080fcdf27881e1d6b20d6a6a12e42d414.scope: Deactivated successfully.
Dec 11 09:16:34 compute-0 podman[94332]: 2025-12-11 09:16:34.142917752 +0000 UTC m=+0.208748186 container died 48ca0781ff95e4949122e6eed78e8e5080fcdf27881e1d6b20d6a6a12e42d414 (image=quay.io/ceph/ceph:v19, name=beautiful_mestorf, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 11 09:16:34 compute-0 ceph-mon[74426]: pgmap v34: 198 pgs: 2 unknown, 196 active+clean; 450 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 11 09:16:34 compute-0 ceph-mon[74426]: Health check cleared: POOL_APP_NOT_ENABLED (was: 1 pool(s) do not have an application enabled)
Dec 11 09:16:34 compute-0 ceph-mon[74426]: Cluster is now healthy
Dec 11 09:16:34 compute-0 ceph-mon[74426]: from='mgr.14400 192.168.122.100:0/1063598679' entity='mgr.compute-0.wwpcae' 
Dec 11 09:16:34 compute-0 ceph-mon[74426]: from='mgr.14400 192.168.122.100:0/1063598679' entity='mgr.compute-0.wwpcae' 
Dec 11 09:16:34 compute-0 ceph-mon[74426]: from='mgr.14400 192.168.122.100:0/1063598679' entity='mgr.compute-0.wwpcae' 
Dec 11 09:16:34 compute-0 ceph-mon[74426]: from='mgr.14400 192.168.122.100:0/1063598679' entity='mgr.compute-0.wwpcae' 
Dec 11 09:16:34 compute-0 ceph-mon[74426]: from='mgr.14400 192.168.122.100:0/1063598679' entity='mgr.compute-0.wwpcae' 
Dec 11 09:16:34 compute-0 ceph-mon[74426]: from='mgr.14400 192.168.122.100:0/1063598679' entity='mgr.compute-0.wwpcae' 
Dec 11 09:16:34 compute-0 ceph-mon[74426]: Creating key for client.nfs.cephfs.0.0.compute-1.vlrwzy
Dec 11 09:16:34 compute-0 ceph-mon[74426]: from='mgr.14400 192.168.122.100:0/1063598679' entity='mgr.compute-0.wwpcae' cmd=[{"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.0.0.compute-1.vlrwzy", "caps": ["mon", "allow r", "osd", "allow rw pool=.nfs namespace=cephfs"]}]: dispatch
Dec 11 09:16:34 compute-0 ceph-mon[74426]: from='mgr.14400 192.168.122.100:0/1063598679' entity='mgr.compute-0.wwpcae' cmd='[{"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.0.0.compute-1.vlrwzy", "caps": ["mon", "allow r", "osd", "allow rw pool=.nfs namespace=cephfs"]}]': finished
Dec 11 09:16:34 compute-0 ceph-mon[74426]: Ensuring nfs.cephfs.0 is in the ganesha grace table
Dec 11 09:16:34 compute-0 ceph-mon[74426]: from='mgr.14400 192.168.122.100:0/1063598679' entity='mgr.compute-0.wwpcae' cmd=[{"prefix": "auth get-or-create", "entity": "client.mgr.nfs.grace.nfs.cephfs", "caps": ["mon", "allow r", "osd", "allow rwx pool .nfs"]}]: dispatch
Dec 11 09:16:34 compute-0 ceph-mon[74426]: from='mgr.14400 192.168.122.100:0/1063598679' entity='mgr.compute-0.wwpcae' cmd='[{"prefix": "auth get-or-create", "entity": "client.mgr.nfs.grace.nfs.cephfs", "caps": ["mon", "allow r", "osd", "allow rwx pool .nfs"]}]': finished
Dec 11 09:16:34 compute-0 ceph-mon[74426]: from='mgr.14400 192.168.122.100:0/1063598679' entity='mgr.compute-0.wwpcae' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 11 09:16:34 compute-0 ceph-mon[74426]: from='mgr.14400 192.168.122.100:0/1063598679' entity='mgr.compute-0.wwpcae' cmd=[{"prefix": "auth rm", "entity": "client.mgr.nfs.grace.nfs.cephfs"}]: dispatch
Dec 11 09:16:34 compute-0 ceph-mon[74426]: from='mgr.14400 192.168.122.100:0/1063598679' entity='mgr.compute-0.wwpcae' cmd='[{"prefix": "auth rm", "entity": "client.mgr.nfs.grace.nfs.cephfs"}]': finished
Dec 11 09:16:34 compute-0 ceph-mon[74426]: Rados config object exists: conf-nfs.cephfs
Dec 11 09:16:34 compute-0 ceph-mon[74426]: Creating key for client.nfs.cephfs.0.0.compute-1.vlrwzy-rgw
Dec 11 09:16:34 compute-0 ceph-mon[74426]: from='mgr.14400 192.168.122.100:0/1063598679' entity='mgr.compute-0.wwpcae' cmd=[{"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.0.0.compute-1.vlrwzy-rgw", "caps": ["mon", "allow r", "osd", "allow rwx tag rgw *=*"]}]: dispatch
Dec 11 09:16:34 compute-0 ceph-mon[74426]: from='mgr.14400 192.168.122.100:0/1063598679' entity='mgr.compute-0.wwpcae' cmd='[{"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.0.0.compute-1.vlrwzy-rgw", "caps": ["mon", "allow r", "osd", "allow rwx tag rgw *=*"]}]': finished
Dec 11 09:16:34 compute-0 ceph-mon[74426]: Bind address in nfs.cephfs.0.0.compute-1.vlrwzy's ganesha conf is defaulting to empty
Dec 11 09:16:34 compute-0 ceph-mon[74426]: from='mgr.14400 192.168.122.100:0/1063598679' entity='mgr.compute-0.wwpcae' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 11 09:16:34 compute-0 ceph-mon[74426]: Deploying daemon nfs.cephfs.0.0.compute-1.vlrwzy on compute-1
Dec 11 09:16:34 compute-0 systemd[1]: var-lib-containers-storage-overlay-a785c8e274ccfd2f63ede9de3be25d647cfebf345c85dfe83a38eddc1e4a575f-merged.mount: Deactivated successfully.
Dec 11 09:16:34 compute-0 ceph-mon[74426]: mon.compute-0@0(leader).mds e7 new map
Dec 11 09:16:34 compute-0 ceph-mon[74426]: mon.compute-0@0(leader).mds e7 print_map
                                           e7
                                           btime 2025-12-11T09:16:34:169469+0000
                                           enable_multiple, ever_enabled_multiple: 1,1
                                           default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}
                                           legacy client fscid: 1
                                            
                                           Filesystem 'cephfs' (1)
                                           fs_name        cephfs
                                           epoch        7
                                           flags        12 joinable allow_snaps allow_multimds_snaps
                                           created        2025-12-11T09:15:57.915143+0000
                                           modified        2025-12-11T09:16:34.169018+0000
                                           tableserver        0
                                           root        0
                                           session_timeout        60
                                           session_autoclose        300
                                           max_file_size        1099511627776
                                           max_xattr_size        65536
                                           required_client_features        {}
                                           last_failure        0
                                           last_failure_osd_epoch        0
                                           compat        compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}
                                           max_mds        1
                                           in        0
                                           up        {0=24187}
                                           failed        
                                           damaged        
                                           stopped        
                                           data_pools        [7]
                                           metadata_pool        6
                                           inline_data        disabled
                                           balancer        
                                           bal_rank_mask        -1
                                           standby_count_wanted        1
                                           qdb_cluster        leader: 24187 members: 24187
                                           [mds.cephfs.compute-2.abebdg{0:24187} state up:active seq 3 join_fscid=1 addr [v2:192.168.122.102:6804/1341793648,v1:192.168.122.102:6805/1341793648] compat {c=[1],r=[1],i=[1fff]}]
                                            
                                            
                                           Standby daemons:
                                            
                                           [mds.cephfs.compute-0.ejykhm{-1:14481} state up:standby seq 1 addr [v2:192.168.122.100:6806/2227004311,v1:192.168.122.100:6807/2227004311] compat {c=[1],r=[1],i=[1fff]}]
                                           [mds.cephfs.compute-1.hifxsh{-1:24191} state up:standby seq 1 addr [v2:192.168.122.101:6804/3436560509,v1:192.168.122.101:6805/3436560509] compat {c=[1],r=[1],i=[1fff]}]
Dec 11 09:16:34 compute-0 ceph-mon[74426]: log_channel(cluster) log [DBG] : mds.? [v2:192.168.122.101:6804/3436560509,v1:192.168.122.101:6805/3436560509] up:boot
Dec 11 09:16:34 compute-0 ceph-mon[74426]: log_channel(cluster) log [DBG] : mds.? [v2:192.168.122.102:6804/1341793648,v1:192.168.122.102:6805/1341793648] up:active
Dec 11 09:16:34 compute-0 ceph-mon[74426]: log_channel(cluster) log [DBG] : fsmap cephfs:1 {0=cephfs.compute-2.abebdg=up:active} 2 up:standby
Dec 11 09:16:34 compute-0 ceph-mon[74426]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mds metadata", "who": "cephfs.compute-1.hifxsh"} v 0)
Dec 11 09:16:34 compute-0 ceph-mon[74426]: log_channel(audit) log [DBG] : from='mgr.14400 192.168.122.100:0/1063598679' entity='mgr.compute-0.wwpcae' cmd=[{"prefix": "mds metadata", "who": "cephfs.compute-1.hifxsh"}]: dispatch
Dec 11 09:16:34 compute-0 ceph-mon[74426]: mon.compute-0@0(leader).mds e7 all = 0
Dec 11 09:16:34 compute-0 podman[94332]: 2025-12-11 09:16:34.189620564 +0000 UTC m=+0.255450988 container remove 48ca0781ff95e4949122e6eed78e8e5080fcdf27881e1d6b20d6a6a12e42d414 (image=quay.io/ceph/ceph:v19, name=beautiful_mestorf, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, CEPH_REF=squid)
Dec 11 09:16:34 compute-0 systemd[1]: libpod-conmon-48ca0781ff95e4949122e6eed78e8e5080fcdf27881e1d6b20d6a6a12e42d414.scope: Deactivated successfully.
Dec 11 09:16:34 compute-0 sudo[94329]: pam_unix(sudo:session): session closed for user root
Dec 11 09:16:34 compute-0 ceph-mgr[74715]: log_channel(cluster) log [DBG] : pgmap v35: 198 pgs: 198 active+clean; 456 KiB data, 81 MiB used, 60 GiB / 60 GiB avail; 79 KiB/s rd, 8.2 KiB/s wr, 166 op/s
Dec 11 09:16:35 compute-0 ceph-mon[74426]: mds.? [v2:192.168.122.101:6804/3436560509,v1:192.168.122.101:6805/3436560509] up:boot
Dec 11 09:16:35 compute-0 ceph-mon[74426]: mds.? [v2:192.168.122.102:6804/1341793648,v1:192.168.122.102:6805/1341793648] up:active
Dec 11 09:16:35 compute-0 ceph-mon[74426]: fsmap cephfs:1 {0=cephfs.compute-2.abebdg=up:active} 2 up:standby
Dec 11 09:16:35 compute-0 ceph-mon[74426]: from='mgr.14400 192.168.122.100:0/1063598679' entity='mgr.compute-0.wwpcae' cmd=[{"prefix": "mds metadata", "who": "cephfs.compute-1.hifxsh"}]: dispatch
Dec 11 09:16:35 compute-0 ceph-mon[74426]: mon.compute-0@0(leader).mds e8 new map
Dec 11 09:16:35 compute-0 ceph-mon[74426]: mon.compute-0@0(leader).mds e8 print_map
                                           e8
                                           btime 2025-12-11T09:16:35:218729+0000
                                           enable_multiple, ever_enabled_multiple: 1,1
                                           default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}
                                           legacy client fscid: 1
                                            
                                           Filesystem 'cephfs' (1)
                                           fs_name        cephfs
                                           epoch        7
                                           flags        12 joinable allow_snaps allow_multimds_snaps
                                           created        2025-12-11T09:15:57.915143+0000
                                           modified        2025-12-11T09:16:34.169018+0000
                                           tableserver        0
                                           root        0
                                           session_timeout        60
                                           session_autoclose        300
                                           max_file_size        1099511627776
                                           max_xattr_size        65536
                                           required_client_features        {}
                                           last_failure        0
                                           last_failure_osd_epoch        0
                                           compat        compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}
                                           max_mds        1
                                           in        0
                                           up        {0=24187}
                                           failed        
                                           damaged        
                                           stopped        
                                           data_pools        [7]
                                           metadata_pool        6
                                           inline_data        disabled
                                           balancer        
                                           bal_rank_mask        -1
                                           standby_count_wanted        1
                                           qdb_cluster        leader: 24187 members: 24187
                                           [mds.cephfs.compute-2.abebdg{0:24187} state up:active seq 3 join_fscid=1 addr [v2:192.168.122.102:6804/1341793648,v1:192.168.122.102:6805/1341793648] compat {c=[1],r=[1],i=[1fff]}]
                                            
                                            
                                           Standby daemons:
                                            
                                           [mds.cephfs.compute-0.ejykhm{-1:14481} state up:standby seq 2 join_fscid=1 addr [v2:192.168.122.100:6806/2227004311,v1:192.168.122.100:6807/2227004311] compat {c=[1],r=[1],i=[1fff]}]
                                           [mds.cephfs.compute-1.hifxsh{-1:24191} state up:standby seq 1 addr [v2:192.168.122.101:6804/3436560509,v1:192.168.122.101:6805/3436560509] compat {c=[1],r=[1],i=[1fff]}]
Dec 11 09:16:35 compute-0 ceph-mds[94216]: mds.cephfs.compute-0.ejykhm Updating MDS map to version 8 from mon.0
Dec 11 09:16:35 compute-0 ceph-mon[74426]: log_channel(cluster) log [DBG] : mds.? [v2:192.168.122.100:6806/2227004311,v1:192.168.122.100:6807/2227004311] up:standby
Dec 11 09:16:35 compute-0 ceph-mon[74426]: log_channel(cluster) log [DBG] : fsmap cephfs:1 {0=cephfs.compute-2.abebdg=up:active} 2 up:standby
Dec 11 09:16:35 compute-0 ceph-mon[74426]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Dec 11 09:16:35 compute-0 ceph-mon[74426]: log_channel(audit) log [INF] : from='mgr.14400 192.168.122.100:0/1063598679' entity='mgr.compute-0.wwpcae' 
Dec 11 09:16:35 compute-0 ceph-mon[74426]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Dec 11 09:16:35 compute-0 ceph-mon[74426]: log_channel(audit) log [INF] : from='mgr.14400 192.168.122.100:0/1063598679' entity='mgr.compute-0.wwpcae' 
Dec 11 09:16:35 compute-0 ceph-mon[74426]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Dec 11 09:16:35 compute-0 ceph-mon[74426]: log_channel(audit) log [INF] : from='mgr.14400 192.168.122.100:0/1063598679' entity='mgr.compute-0.wwpcae' 
Dec 11 09:16:35 compute-0 ceph-mgr[74715]: [cephadm INFO cephadm.services.nfs] Creating key for client.nfs.cephfs.1.0.compute-2.ydhhov
Dec 11 09:16:35 compute-0 ceph-mgr[74715]: log_channel(cephadm) log [INF] : Creating key for client.nfs.cephfs.1.0.compute-2.ydhhov
Dec 11 09:16:35 compute-0 ceph-mon[74426]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.1.0.compute-2.ydhhov", "caps": ["mon", "allow r", "osd", "allow rw pool=.nfs namespace=cephfs"]} v 0)
Dec 11 09:16:35 compute-0 ceph-mon[74426]: log_channel(audit) log [INF] : from='mgr.14400 192.168.122.100:0/1063598679' entity='mgr.compute-0.wwpcae' cmd=[{"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.1.0.compute-2.ydhhov", "caps": ["mon", "allow r", "osd", "allow rw pool=.nfs namespace=cephfs"]}]: dispatch
Dec 11 09:16:35 compute-0 ceph-mon[74426]: log_channel(audit) log [INF] : from='mgr.14400 192.168.122.100:0/1063598679' entity='mgr.compute-0.wwpcae' cmd='[{"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.1.0.compute-2.ydhhov", "caps": ["mon", "allow r", "osd", "allow rw pool=.nfs namespace=cephfs"]}]': finished
Dec 11 09:16:35 compute-0 ceph-mgr[74715]: [cephadm INFO root] Ensuring nfs.cephfs.1 is in the ganesha grace table
Dec 11 09:16:35 compute-0 ceph-mgr[74715]: log_channel(cephadm) log [INF] : Ensuring nfs.cephfs.1 is in the ganesha grace table
Dec 11 09:16:35 compute-0 ceph-mon[74426]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.mgr.nfs.grace.nfs.cephfs", "caps": ["mon", "allow r", "osd", "allow rwx pool .nfs"]} v 0)
Dec 11 09:16:35 compute-0 ceph-mon[74426]: log_channel(audit) log [INF] : from='mgr.14400 192.168.122.100:0/1063598679' entity='mgr.compute-0.wwpcae' cmd=[{"prefix": "auth get-or-create", "entity": "client.mgr.nfs.grace.nfs.cephfs", "caps": ["mon", "allow r", "osd", "allow rwx pool .nfs"]}]: dispatch
Dec 11 09:16:35 compute-0 ceph-mon[74426]: log_channel(audit) log [INF] : from='mgr.14400 192.168.122.100:0/1063598679' entity='mgr.compute-0.wwpcae' cmd='[{"prefix": "auth get-or-create", "entity": "client.mgr.nfs.grace.nfs.cephfs", "caps": ["mon", "allow r", "osd", "allow rwx pool .nfs"]}]': finished
Dec 11 09:16:35 compute-0 ceph-mon[74426]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 11 09:16:35 compute-0 ceph-mon[74426]: log_channel(audit) log [DBG] : from='mgr.14400 192.168.122.100:0/1063598679' entity='mgr.compute-0.wwpcae' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 11 09:16:36 compute-0 ceph-mon[74426]: pgmap v35: 198 pgs: 198 active+clean; 456 KiB data, 81 MiB used, 60 GiB / 60 GiB avail; 79 KiB/s rd, 8.2 KiB/s wr, 166 op/s
Dec 11 09:16:36 compute-0 ceph-mon[74426]: mds.? [v2:192.168.122.100:6806/2227004311,v1:192.168.122.100:6807/2227004311] up:standby
Dec 11 09:16:36 compute-0 ceph-mon[74426]: fsmap cephfs:1 {0=cephfs.compute-2.abebdg=up:active} 2 up:standby
Dec 11 09:16:36 compute-0 ceph-mon[74426]: from='mgr.14400 192.168.122.100:0/1063598679' entity='mgr.compute-0.wwpcae' 
Dec 11 09:16:36 compute-0 ceph-mon[74426]: from='mgr.14400 192.168.122.100:0/1063598679' entity='mgr.compute-0.wwpcae' 
Dec 11 09:16:36 compute-0 ceph-mon[74426]: from='mgr.14400 192.168.122.100:0/1063598679' entity='mgr.compute-0.wwpcae' 
Dec 11 09:16:36 compute-0 ceph-mon[74426]: Creating key for client.nfs.cephfs.1.0.compute-2.ydhhov
Dec 11 09:16:36 compute-0 ceph-mon[74426]: from='mgr.14400 192.168.122.100:0/1063598679' entity='mgr.compute-0.wwpcae' cmd=[{"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.1.0.compute-2.ydhhov", "caps": ["mon", "allow r", "osd", "allow rw pool=.nfs namespace=cephfs"]}]: dispatch
Dec 11 09:16:36 compute-0 ceph-mon[74426]: from='mgr.14400 192.168.122.100:0/1063598679' entity='mgr.compute-0.wwpcae' cmd='[{"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.1.0.compute-2.ydhhov", "caps": ["mon", "allow r", "osd", "allow rw pool=.nfs namespace=cephfs"]}]': finished
Dec 11 09:16:36 compute-0 ceph-mon[74426]: from='mgr.14400 192.168.122.100:0/1063598679' entity='mgr.compute-0.wwpcae' cmd=[{"prefix": "auth get-or-create", "entity": "client.mgr.nfs.grace.nfs.cephfs", "caps": ["mon", "allow r", "osd", "allow rwx pool .nfs"]}]: dispatch
Dec 11 09:16:36 compute-0 ceph-mon[74426]: from='mgr.14400 192.168.122.100:0/1063598679' entity='mgr.compute-0.wwpcae' cmd='[{"prefix": "auth get-or-create", "entity": "client.mgr.nfs.grace.nfs.cephfs", "caps": ["mon", "allow r", "osd", "allow rwx pool .nfs"]}]': finished
Dec 11 09:16:36 compute-0 ceph-mon[74426]: from='mgr.14400 192.168.122.100:0/1063598679' entity='mgr.compute-0.wwpcae' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 11 09:16:36 compute-0 ceph-mgr[74715]: log_channel(cluster) log [DBG] : pgmap v36: 198 pgs: 198 active+clean; 456 KiB data, 81 MiB used, 60 GiB / 60 GiB avail; 67 KiB/s rd, 7.0 KiB/s wr, 142 op/s
Dec 11 09:16:37 compute-0 ceph-mgr[74715]: [progress INFO root] Writing back 16 completed events
Dec 11 09:16:37 compute-0 ceph-mon[74426]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0)
Dec 11 09:16:37 compute-0 ceph-mon[74426]: log_channel(audit) log [INF] : from='mgr.14400 192.168.122.100:0/1063598679' entity='mgr.compute-0.wwpcae' 
Dec 11 09:16:37 compute-0 ceph-mgr[74715]: [progress INFO root] Completed event 680eafcc-a7e1-4983-b74e-6fe53d1423de (Global Recovery Event) in 10 seconds
Dec 11 09:16:37 compute-0 ceph-mon[74426]: Ensuring nfs.cephfs.1 is in the ganesha grace table
Dec 11 09:16:37 compute-0 ceph-mon[74426]: from='mgr.14400 192.168.122.100:0/1063598679' entity='mgr.compute-0.wwpcae' 
Dec 11 09:16:38 compute-0 ceph-mon[74426]: mon.compute-0@0(leader).mds e9 new map
Dec 11 09:16:38 compute-0 ceph-mon[74426]: mon.compute-0@0(leader).mds e9 print_map
                                           e9
                                           btime 2025-12-11T09:16:38:106806+0000
                                           enable_multiple, ever_enabled_multiple: 1,1
                                           default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}
                                           legacy client fscid: 1
                                            
                                           Filesystem 'cephfs' (1)
                                           fs_name        cephfs
                                           epoch        7
                                           flags        12 joinable allow_snaps allow_multimds_snaps
                                           created        2025-12-11T09:15:57.915143+0000
                                           modified        2025-12-11T09:16:34.169018+0000
                                           tableserver        0
                                           root        0
                                           session_timeout        60
                                           session_autoclose        300
                                           max_file_size        1099511627776
                                           max_xattr_size        65536
                                           required_client_features        {}
                                           last_failure        0
                                           last_failure_osd_epoch        0
                                           compat        compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}
                                           max_mds        1
                                           in        0
                                           up        {0=24187}
                                           failed        
                                           damaged        
                                           stopped        
                                           data_pools        [7]
                                           metadata_pool        6
                                           inline_data        disabled
                                           balancer        
                                           bal_rank_mask        -1
                                           standby_count_wanted        1
                                           qdb_cluster        leader: 24187 members: 24187
                                           [mds.cephfs.compute-2.abebdg{0:24187} state up:active seq 3 join_fscid=1 addr [v2:192.168.122.102:6804/1341793648,v1:192.168.122.102:6805/1341793648] compat {c=[1],r=[1],i=[1fff]}]
                                            
                                            
                                           Standby daemons:
                                            
                                           [mds.cephfs.compute-0.ejykhm{-1:14481} state up:standby seq 2 join_fscid=1 addr [v2:192.168.122.100:6806/2227004311,v1:192.168.122.100:6807/2227004311] compat {c=[1],r=[1],i=[1fff]}]
                                           [mds.cephfs.compute-1.hifxsh{-1:24191} state up:standby seq 2 join_fscid=1 addr [v2:192.168.122.101:6804/3436560509,v1:192.168.122.101:6805/3436560509] compat {c=[1],r=[1],i=[1fff]}]
Dec 11 09:16:38 compute-0 ceph-mon[74426]: log_channel(cluster) log [DBG] : mds.? [v2:192.168.122.101:6804/3436560509,v1:192.168.122.101:6805/3436560509] up:standby
Dec 11 09:16:38 compute-0 ceph-mon[74426]: log_channel(cluster) log [DBG] : fsmap cephfs:1 {0=cephfs.compute-2.abebdg=up:active} 2 up:standby
Dec 11 09:16:38 compute-0 ceph-mon[74426]: pgmap v36: 198 pgs: 198 active+clean; 456 KiB data, 81 MiB used, 60 GiB / 60 GiB avail; 67 KiB/s rd, 7.0 KiB/s wr, 142 op/s
Dec 11 09:16:38 compute-0 ceph-mon[74426]: mds.? [v2:192.168.122.101:6804/3436560509,v1:192.168.122.101:6805/3436560509] up:standby
Dec 11 09:16:38 compute-0 ceph-mon[74426]: fsmap cephfs:1 {0=cephfs.compute-2.abebdg=up:active} 2 up:standby
Dec 11 09:16:38 compute-0 ceph-mon[74426]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth rm", "entity": "client.mgr.nfs.grace.nfs.cephfs"} v 0)
Dec 11 09:16:38 compute-0 ceph-mon[74426]: log_channel(audit) log [INF] : from='mgr.14400 192.168.122.100:0/1063598679' entity='mgr.compute-0.wwpcae' cmd=[{"prefix": "auth rm", "entity": "client.mgr.nfs.grace.nfs.cephfs"}]: dispatch
Dec 11 09:16:38 compute-0 ceph-mon[74426]: log_channel(audit) log [INF] : from='mgr.14400 192.168.122.100:0/1063598679' entity='mgr.compute-0.wwpcae' cmd='[{"prefix": "auth rm", "entity": "client.mgr.nfs.grace.nfs.cephfs"}]': finished
Dec 11 09:16:38 compute-0 ceph-mgr[74715]: [cephadm INFO cephadm.services.nfs] Rados config object exists: conf-nfs.cephfs
Dec 11 09:16:38 compute-0 ceph-mgr[74715]: log_channel(cephadm) log [INF] : Rados config object exists: conf-nfs.cephfs
Dec 11 09:16:38 compute-0 ceph-mgr[74715]: [cephadm INFO cephadm.services.nfs] Creating key for client.nfs.cephfs.1.0.compute-2.ydhhov-rgw
Dec 11 09:16:38 compute-0 ceph-mgr[74715]: log_channel(cephadm) log [INF] : Creating key for client.nfs.cephfs.1.0.compute-2.ydhhov-rgw
Dec 11 09:16:38 compute-0 ceph-mon[74426]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.1.0.compute-2.ydhhov-rgw", "caps": ["mon", "allow r", "osd", "allow rwx tag rgw *=*"]} v 0)
Dec 11 09:16:38 compute-0 ceph-mon[74426]: log_channel(audit) log [INF] : from='mgr.14400 192.168.122.100:0/1063598679' entity='mgr.compute-0.wwpcae' cmd=[{"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.1.0.compute-2.ydhhov-rgw", "caps": ["mon", "allow r", "osd", "allow rwx tag rgw *=*"]}]: dispatch
Dec 11 09:16:38 compute-0 ceph-mon[74426]: log_channel(audit) log [INF] : from='mgr.14400 192.168.122.100:0/1063598679' entity='mgr.compute-0.wwpcae' cmd='[{"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.1.0.compute-2.ydhhov-rgw", "caps": ["mon", "allow r", "osd", "allow rwx tag rgw *=*"]}]': finished
Dec 11 09:16:38 compute-0 ceph-mgr[74715]: [cephadm WARNING cephadm.services.nfs] Bind address in nfs.cephfs.1.0.compute-2.ydhhov's ganesha conf is defaulting to empty
Dec 11 09:16:38 compute-0 ceph-mgr[74715]: log_channel(cephadm) log [WRN] : Bind address in nfs.cephfs.1.0.compute-2.ydhhov's ganesha conf is defaulting to empty
Dec 11 09:16:38 compute-0 ceph-mon[74426]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 11 09:16:38 compute-0 ceph-mon[74426]: log_channel(audit) log [DBG] : from='mgr.14400 192.168.122.100:0/1063598679' entity='mgr.compute-0.wwpcae' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 11 09:16:38 compute-0 ceph-mgr[74715]: [cephadm INFO cephadm.serve] Deploying daemon nfs.cephfs.1.0.compute-2.ydhhov on compute-2
Dec 11 09:16:38 compute-0 ceph-mgr[74715]: log_channel(cephadm) log [INF] : Deploying daemon nfs.cephfs.1.0.compute-2.ydhhov on compute-2
Dec 11 09:16:38 compute-0 ceph-mgr[74715]: log_channel(cluster) log [DBG] : pgmap v37: 198 pgs: 198 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 187 KiB/s rd, 7.1 KiB/s wr, 345 op/s
Dec 11 09:16:39 compute-0 ceph-mon[74426]: mon.compute-0@0(leader).osd e56 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 11 09:16:39 compute-0 ceph-mon[74426]: from='mgr.14400 192.168.122.100:0/1063598679' entity='mgr.compute-0.wwpcae' cmd=[{"prefix": "auth rm", "entity": "client.mgr.nfs.grace.nfs.cephfs"}]: dispatch
Dec 11 09:16:39 compute-0 ceph-mon[74426]: from='mgr.14400 192.168.122.100:0/1063598679' entity='mgr.compute-0.wwpcae' cmd='[{"prefix": "auth rm", "entity": "client.mgr.nfs.grace.nfs.cephfs"}]': finished
Dec 11 09:16:39 compute-0 ceph-mon[74426]: from='mgr.14400 192.168.122.100:0/1063598679' entity='mgr.compute-0.wwpcae' cmd=[{"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.1.0.compute-2.ydhhov-rgw", "caps": ["mon", "allow r", "osd", "allow rwx tag rgw *=*"]}]: dispatch
Dec 11 09:16:39 compute-0 ceph-mon[74426]: from='mgr.14400 192.168.122.100:0/1063598679' entity='mgr.compute-0.wwpcae' cmd='[{"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.1.0.compute-2.ydhhov-rgw", "caps": ["mon", "allow r", "osd", "allow rwx tag rgw *=*"]}]': finished
Dec 11 09:16:39 compute-0 ceph-mon[74426]: from='mgr.14400 192.168.122.100:0/1063598679' entity='mgr.compute-0.wwpcae' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 11 09:16:40 compute-0 ceph-mon[74426]: Rados config object exists: conf-nfs.cephfs
Dec 11 09:16:40 compute-0 ceph-mon[74426]: Creating key for client.nfs.cephfs.1.0.compute-2.ydhhov-rgw
Dec 11 09:16:40 compute-0 ceph-mon[74426]: Bind address in nfs.cephfs.1.0.compute-2.ydhhov's ganesha conf is defaulting to empty
Dec 11 09:16:40 compute-0 ceph-mon[74426]: Deploying daemon nfs.cephfs.1.0.compute-2.ydhhov on compute-2
Dec 11 09:16:40 compute-0 ceph-mon[74426]: pgmap v37: 198 pgs: 198 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 187 KiB/s rd, 7.1 KiB/s wr, 345 op/s
Dec 11 09:16:40 compute-0 ceph-mon[74426]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0)
Dec 11 09:16:40 compute-0 ceph-mgr[74715]: log_channel(cluster) log [DBG] : pgmap v38: 198 pgs: 198 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 151 KiB/s rd, 5.8 KiB/s wr, 280 op/s
Dec 11 09:16:40 compute-0 ceph-mon[74426]: log_channel(audit) log [INF] : from='mgr.14400 192.168.122.100:0/1063598679' entity='mgr.compute-0.wwpcae' 
Dec 11 09:16:40 compute-0 ceph-mon[74426]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Dec 11 09:16:40 compute-0 ceph-mon[74426]: log_channel(audit) log [INF] : from='mgr.14400 192.168.122.100:0/1063598679' entity='mgr.compute-0.wwpcae' 
Dec 11 09:16:40 compute-0 ceph-mon[74426]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Dec 11 09:16:40 compute-0 ceph-mon[74426]: log_channel(audit) log [INF] : from='mgr.14400 192.168.122.100:0/1063598679' entity='mgr.compute-0.wwpcae' 
Dec 11 09:16:40 compute-0 ceph-mgr[74715]: [cephadm INFO cephadm.services.nfs] Creating key for client.nfs.cephfs.2.0.compute-0.iryjby
Dec 11 09:16:40 compute-0 ceph-mgr[74715]: log_channel(cephadm) log [INF] : Creating key for client.nfs.cephfs.2.0.compute-0.iryjby
Dec 11 09:16:40 compute-0 ceph-mon[74426]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.2.0.compute-0.iryjby", "caps": ["mon", "allow r", "osd", "allow rw pool=.nfs namespace=cephfs"]} v 0)
Dec 11 09:16:40 compute-0 ceph-mon[74426]: log_channel(audit) log [INF] : from='mgr.14400 192.168.122.100:0/1063598679' entity='mgr.compute-0.wwpcae' cmd=[{"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.2.0.compute-0.iryjby", "caps": ["mon", "allow r", "osd", "allow rw pool=.nfs namespace=cephfs"]}]: dispatch
Dec 11 09:16:40 compute-0 ceph-mon[74426]: log_channel(audit) log [INF] : from='mgr.14400 192.168.122.100:0/1063598679' entity='mgr.compute-0.wwpcae' cmd='[{"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.2.0.compute-0.iryjby", "caps": ["mon", "allow r", "osd", "allow rw pool=.nfs namespace=cephfs"]}]': finished
Dec 11 09:16:40 compute-0 ceph-mgr[74715]: [cephadm INFO root] Ensuring nfs.cephfs.2 is in the ganesha grace table
Dec 11 09:16:40 compute-0 ceph-mgr[74715]: log_channel(cephadm) log [INF] : Ensuring nfs.cephfs.2 is in the ganesha grace table
Dec 11 09:16:40 compute-0 ceph-mon[74426]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.mgr.nfs.grace.nfs.cephfs", "caps": ["mon", "allow r", "osd", "allow rwx pool .nfs"]} v 0)
Dec 11 09:16:40 compute-0 ceph-mon[74426]: log_channel(audit) log [INF] : from='mgr.14400 192.168.122.100:0/1063598679' entity='mgr.compute-0.wwpcae' cmd=[{"prefix": "auth get-or-create", "entity": "client.mgr.nfs.grace.nfs.cephfs", "caps": ["mon", "allow r", "osd", "allow rwx pool .nfs"]}]: dispatch
Dec 11 09:16:40 compute-0 ceph-mon[74426]: log_channel(audit) log [INF] : from='mgr.14400 192.168.122.100:0/1063598679' entity='mgr.compute-0.wwpcae' cmd='[{"prefix": "auth get-or-create", "entity": "client.mgr.nfs.grace.nfs.cephfs", "caps": ["mon", "allow r", "osd", "allow rwx pool .nfs"]}]': finished
Dec 11 09:16:41 compute-0 ceph-mon[74426]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 11 09:16:41 compute-0 ceph-mon[74426]: log_channel(audit) log [DBG] : from='mgr.14400 192.168.122.100:0/1063598679' entity='mgr.compute-0.wwpcae' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 11 09:16:41 compute-0 ceph-mon[74426]: pgmap v38: 198 pgs: 198 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 151 KiB/s rd, 5.8 KiB/s wr, 280 op/s
Dec 11 09:16:41 compute-0 ceph-mon[74426]: from='mgr.14400 192.168.122.100:0/1063598679' entity='mgr.compute-0.wwpcae' 
Dec 11 09:16:41 compute-0 ceph-mon[74426]: from='mgr.14400 192.168.122.100:0/1063598679' entity='mgr.compute-0.wwpcae' 
Dec 11 09:16:41 compute-0 ceph-mon[74426]: from='mgr.14400 192.168.122.100:0/1063598679' entity='mgr.compute-0.wwpcae' 
Dec 11 09:16:41 compute-0 ceph-mon[74426]: Creating key for client.nfs.cephfs.2.0.compute-0.iryjby
Dec 11 09:16:41 compute-0 ceph-mon[74426]: from='mgr.14400 192.168.122.100:0/1063598679' entity='mgr.compute-0.wwpcae' cmd=[{"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.2.0.compute-0.iryjby", "caps": ["mon", "allow r", "osd", "allow rw pool=.nfs namespace=cephfs"]}]: dispatch
Dec 11 09:16:41 compute-0 ceph-mon[74426]: from='mgr.14400 192.168.122.100:0/1063598679' entity='mgr.compute-0.wwpcae' cmd='[{"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.2.0.compute-0.iryjby", "caps": ["mon", "allow r", "osd", "allow rw pool=.nfs namespace=cephfs"]}]': finished
Dec 11 09:16:41 compute-0 ceph-mon[74426]: Ensuring nfs.cephfs.2 is in the ganesha grace table
Dec 11 09:16:41 compute-0 ceph-mon[74426]: from='mgr.14400 192.168.122.100:0/1063598679' entity='mgr.compute-0.wwpcae' cmd=[{"prefix": "auth get-or-create", "entity": "client.mgr.nfs.grace.nfs.cephfs", "caps": ["mon", "allow r", "osd", "allow rwx pool .nfs"]}]: dispatch
Dec 11 09:16:41 compute-0 ceph-mon[74426]: from='mgr.14400 192.168.122.100:0/1063598679' entity='mgr.compute-0.wwpcae' cmd='[{"prefix": "auth get-or-create", "entity": "client.mgr.nfs.grace.nfs.cephfs", "caps": ["mon", "allow r", "osd", "allow rwx pool .nfs"]}]': finished
Dec 11 09:16:41 compute-0 ceph-mon[74426]: from='mgr.14400 192.168.122.100:0/1063598679' entity='mgr.compute-0.wwpcae' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 11 09:16:42 compute-0 ceph-mgr[74715]: [progress INFO root] Writing back 17 completed events
Dec 11 09:16:42 compute-0 ceph-mon[74426]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0)
Dec 11 09:16:42 compute-0 ceph-mon[74426]: log_channel(audit) log [INF] : from='mgr.14400 192.168.122.100:0/1063598679' entity='mgr.compute-0.wwpcae' 
Dec 11 09:16:42 compute-0 ceph-mgr[74715]: log_channel(cluster) log [DBG] : pgmap v39: 198 pgs: 198 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 138 KiB/s rd, 5.3 KiB/s wr, 255 op/s
Dec 11 09:16:43 compute-0 ceph-mon[74426]: from='mgr.14400 192.168.122.100:0/1063598679' entity='mgr.compute-0.wwpcae' 
Dec 11 09:16:44 compute-0 ceph-mon[74426]: mon.compute-0@0(leader).osd e56 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 11 09:16:44 compute-0 ceph-mon[74426]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth rm", "entity": "client.mgr.nfs.grace.nfs.cephfs"} v 0)
Dec 11 09:16:44 compute-0 ceph-mon[74426]: log_channel(audit) log [INF] : from='mgr.14400 192.168.122.100:0/1063598679' entity='mgr.compute-0.wwpcae' cmd=[{"prefix": "auth rm", "entity": "client.mgr.nfs.grace.nfs.cephfs"}]: dispatch
Dec 11 09:16:44 compute-0 ceph-mon[74426]: log_channel(audit) log [INF] : from='mgr.14400 192.168.122.100:0/1063598679' entity='mgr.compute-0.wwpcae' cmd='[{"prefix": "auth rm", "entity": "client.mgr.nfs.grace.nfs.cephfs"}]': finished
Dec 11 09:16:44 compute-0 ceph-mgr[74715]: [cephadm INFO cephadm.services.nfs] Rados config object exists: conf-nfs.cephfs
Dec 11 09:16:44 compute-0 ceph-mgr[74715]: log_channel(cephadm) log [INF] : Rados config object exists: conf-nfs.cephfs
Dec 11 09:16:44 compute-0 ceph-mgr[74715]: [cephadm INFO cephadm.services.nfs] Creating key for client.nfs.cephfs.2.0.compute-0.iryjby-rgw
Dec 11 09:16:44 compute-0 ceph-mgr[74715]: log_channel(cephadm) log [INF] : Creating key for client.nfs.cephfs.2.0.compute-0.iryjby-rgw
Dec 11 09:16:44 compute-0 ceph-mon[74426]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.2.0.compute-0.iryjby-rgw", "caps": ["mon", "allow r", "osd", "allow rwx tag rgw *=*"]} v 0)
Dec 11 09:16:44 compute-0 ceph-mon[74426]: log_channel(audit) log [INF] : from='mgr.14400 192.168.122.100:0/1063598679' entity='mgr.compute-0.wwpcae' cmd=[{"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.2.0.compute-0.iryjby-rgw", "caps": ["mon", "allow r", "osd", "allow rwx tag rgw *=*"]}]: dispatch
Dec 11 09:16:44 compute-0 ceph-mon[74426]: log_channel(audit) log [INF] : from='mgr.14400 192.168.122.100:0/1063598679' entity='mgr.compute-0.wwpcae' cmd='[{"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.2.0.compute-0.iryjby-rgw", "caps": ["mon", "allow r", "osd", "allow rwx tag rgw *=*"]}]': finished
Dec 11 09:16:44 compute-0 ceph-mgr[74715]: [cephadm WARNING cephadm.services.nfs] Bind address in nfs.cephfs.2.0.compute-0.iryjby's ganesha conf is defaulting to empty
Dec 11 09:16:44 compute-0 ceph-mgr[74715]: log_channel(cephadm) log [WRN] : Bind address in nfs.cephfs.2.0.compute-0.iryjby's ganesha conf is defaulting to empty
Dec 11 09:16:44 compute-0 ceph-mon[74426]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 11 09:16:44 compute-0 ceph-mon[74426]: log_channel(audit) log [DBG] : from='mgr.14400 192.168.122.100:0/1063598679' entity='mgr.compute-0.wwpcae' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 11 09:16:44 compute-0 ceph-mgr[74715]: [cephadm INFO cephadm.serve] Deploying daemon nfs.cephfs.2.0.compute-0.iryjby on compute-0
Dec 11 09:16:44 compute-0 ceph-mgr[74715]: log_channel(cephadm) log [INF] : Deploying daemon nfs.cephfs.2.0.compute-0.iryjby on compute-0
Dec 11 09:16:44 compute-0 ceph-mon[74426]: pgmap v39: 198 pgs: 198 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 138 KiB/s rd, 5.3 KiB/s wr, 255 op/s
Dec 11 09:16:44 compute-0 ceph-mon[74426]: from='mgr.14400 192.168.122.100:0/1063598679' entity='mgr.compute-0.wwpcae' cmd=[{"prefix": "auth rm", "entity": "client.mgr.nfs.grace.nfs.cephfs"}]: dispatch
Dec 11 09:16:44 compute-0 ceph-mon[74426]: from='mgr.14400 192.168.122.100:0/1063598679' entity='mgr.compute-0.wwpcae' cmd='[{"prefix": "auth rm", "entity": "client.mgr.nfs.grace.nfs.cephfs"}]': finished
Dec 11 09:16:44 compute-0 ceph-mon[74426]: from='mgr.14400 192.168.122.100:0/1063598679' entity='mgr.compute-0.wwpcae' cmd=[{"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.2.0.compute-0.iryjby-rgw", "caps": ["mon", "allow r", "osd", "allow rwx tag rgw *=*"]}]: dispatch
Dec 11 09:16:44 compute-0 ceph-mon[74426]: from='mgr.14400 192.168.122.100:0/1063598679' entity='mgr.compute-0.wwpcae' cmd='[{"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.2.0.compute-0.iryjby-rgw", "caps": ["mon", "allow r", "osd", "allow rwx tag rgw *=*"]}]': finished
Dec 11 09:16:44 compute-0 ceph-mon[74426]: from='mgr.14400 192.168.122.100:0/1063598679' entity='mgr.compute-0.wwpcae' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 11 09:16:44 compute-0 sudo[94496]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fyyfvgfgqzodrnrnthmkhwkxuhkvkjvx ; /usr/bin/python3'
Dec 11 09:16:44 compute-0 sudo[94450]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 11 09:16:44 compute-0 sudo[94496]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 11 09:16:44 compute-0 sudo[94450]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 11 09:16:44 compute-0 sudo[94450]: pam_unix(sudo:session): session closed for user root
Dec 11 09:16:44 compute-0 sudo[94501]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 _orch deploy --fsid 31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060
Dec 11 09:16:44 compute-0 sudo[94501]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 11 09:16:44 compute-0 python3[94499]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint radosgw-admin quay.io/ceph/ceph:v19 --fsid 31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch ps --daemon-type rgw --format json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 11 09:16:44 compute-0 podman[94526]: 2025-12-11 09:16:44.570234613 +0000 UTC m=+0.060450333 container create 0b2e38f9cda691029c965d321b4246ea972c09c45b917e61ca9b4c7964aa234f (image=quay.io/ceph/ceph:v19, name=blissful_joliot, org.label-schema.schema-version=1.0, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 11 09:16:44 compute-0 systemd[1]: Started libpod-conmon-0b2e38f9cda691029c965d321b4246ea972c09c45b917e61ca9b4c7964aa234f.scope.
Dec 11 09:16:44 compute-0 podman[94526]: 2025-12-11 09:16:44.539653966 +0000 UTC m=+0.029869716 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Dec 11 09:16:44 compute-0 systemd[1]: Started libcrun container.
Dec 11 09:16:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fb3c3bccc6b9b0294fbe5ad3911a619ed2d70e84fbea80e66c2e0042c1d7a341/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 11 09:16:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fb3c3bccc6b9b0294fbe5ad3911a619ed2d70e84fbea80e66c2e0042c1d7a341/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec 11 09:16:44 compute-0 podman[94526]: 2025-12-11 09:16:44.656522054 +0000 UTC m=+0.146737794 container init 0b2e38f9cda691029c965d321b4246ea972c09c45b917e61ca9b4c7964aa234f (image=quay.io/ceph/ceph:v19, name=blissful_joliot, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 11 09:16:44 compute-0 podman[94526]: 2025-12-11 09:16:44.664578087 +0000 UTC m=+0.154793807 container start 0b2e38f9cda691029c965d321b4246ea972c09c45b917e61ca9b4c7964aa234f (image=quay.io/ceph/ceph:v19, name=blissful_joliot, OSD_FLAVOR=default, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 11 09:16:44 compute-0 podman[94526]: 2025-12-11 09:16:44.668707356 +0000 UTC m=+0.158923086 container attach 0b2e38f9cda691029c965d321b4246ea972c09c45b917e61ca9b4c7964aa234f (image=quay.io/ceph/ceph:v19, name=blissful_joliot, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Dec 11 09:16:44 compute-0 blissful_joliot[94543]: ERROR: invalid flag --daemon-type
Dec 11 09:16:44 compute-0 systemd[1]: libpod-0b2e38f9cda691029c965d321b4246ea972c09c45b917e61ca9b4c7964aa234f.scope: Deactivated successfully.
Dec 11 09:16:44 compute-0 podman[94526]: 2025-12-11 09:16:44.732148342 +0000 UTC m=+0.222364092 container died 0b2e38f9cda691029c965d321b4246ea972c09c45b917e61ca9b4c7964aa234f (image=quay.io/ceph/ceph:v19, name=blissful_joliot, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 11 09:16:44 compute-0 systemd[1]: var-lib-containers-storage-overlay-fb3c3bccc6b9b0294fbe5ad3911a619ed2d70e84fbea80e66c2e0042c1d7a341-merged.mount: Deactivated successfully.
Dec 11 09:16:44 compute-0 podman[94526]: 2025-12-11 09:16:44.777415489 +0000 UTC m=+0.267631209 container remove 0b2e38f9cda691029c965d321b4246ea972c09c45b917e61ca9b4c7964aa234f (image=quay.io/ceph/ceph:v19, name=blissful_joliot, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec 11 09:16:44 compute-0 systemd[1]: libpod-conmon-0b2e38f9cda691029c965d321b4246ea972c09c45b917e61ca9b4c7964aa234f.scope: Deactivated successfully.
Dec 11 09:16:44 compute-0 sudo[94496]: pam_unix(sudo:session): session closed for user root
Dec 11 09:16:44 compute-0 ceph-mgr[74715]: log_channel(cluster) log [DBG] : pgmap v40: 198 pgs: 198 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 126 KiB/s rd, 5.5 KiB/s wr, 232 op/s
Dec 11 09:16:44 compute-0 podman[94610]: 2025-12-11 09:16:44.982089936 +0000 UTC m=+0.163483108 container create 7cac07aba832d16b15eddccf5f321b766fff7c50c843ccfd2fda1bd3b29f2508 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nice_hofstadter, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True)
Dec 11 09:16:45 compute-0 systemd[1]: Started libpod-conmon-7cac07aba832d16b15eddccf5f321b766fff7c50c843ccfd2fda1bd3b29f2508.scope.
Dec 11 09:16:45 compute-0 systemd[1]: Started libcrun container.
Dec 11 09:16:45 compute-0 podman[94610]: 2025-12-11 09:16:44.961850082 +0000 UTC m=+0.143243284 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec 11 09:16:45 compute-0 podman[94610]: 2025-12-11 09:16:45.064908399 +0000 UTC m=+0.246301601 container init 7cac07aba832d16b15eddccf5f321b766fff7c50c843ccfd2fda1bd3b29f2508 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nice_hofstadter, io.buildah.version=1.40.1, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True)
Dec 11 09:16:45 compute-0 podman[94610]: 2025-12-11 09:16:45.071156914 +0000 UTC m=+0.252550086 container start 7cac07aba832d16b15eddccf5f321b766fff7c50c843ccfd2fda1bd3b29f2508 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nice_hofstadter, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, OSD_FLAVOR=default)
Dec 11 09:16:45 compute-0 podman[94610]: 2025-12-11 09:16:45.074803739 +0000 UTC m=+0.256196931 container attach 7cac07aba832d16b15eddccf5f321b766fff7c50c843ccfd2fda1bd3b29f2508 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nice_hofstadter, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 11 09:16:45 compute-0 nice_hofstadter[94627]: 167 167
Dec 11 09:16:45 compute-0 systemd[1]: libpod-7cac07aba832d16b15eddccf5f321b766fff7c50c843ccfd2fda1bd3b29f2508.scope: Deactivated successfully.
Dec 11 09:16:45 compute-0 podman[94610]: 2025-12-11 09:16:45.07646241 +0000 UTC m=+0.257855612 container died 7cac07aba832d16b15eddccf5f321b766fff7c50c843ccfd2fda1bd3b29f2508 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nice_hofstadter, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 11 09:16:45 compute-0 systemd[1]: var-lib-containers-storage-overlay-b7dcee538140c0645992579a9cc7292239dd001465579e183d8d195e3ffe2e8b-merged.mount: Deactivated successfully.
Dec 11 09:16:45 compute-0 podman[94610]: 2025-12-11 09:16:45.118233799 +0000 UTC m=+0.299626971 container remove 7cac07aba832d16b15eddccf5f321b766fff7c50c843ccfd2fda1bd3b29f2508 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nice_hofstadter, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 11 09:16:45 compute-0 systemd[1]: libpod-conmon-7cac07aba832d16b15eddccf5f321b766fff7c50c843ccfd2fda1bd3b29f2508.scope: Deactivated successfully.
Dec 11 09:16:45 compute-0 systemd[1]: Reloading.
Dec 11 09:16:45 compute-0 systemd-sysv-generator[94674]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 11 09:16:45 compute-0 systemd-rc-local-generator[94671]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 11 09:16:45 compute-0 ceph-mon[74426]: Rados config object exists: conf-nfs.cephfs
Dec 11 09:16:45 compute-0 ceph-mon[74426]: Creating key for client.nfs.cephfs.2.0.compute-0.iryjby-rgw
Dec 11 09:16:45 compute-0 ceph-mon[74426]: Bind address in nfs.cephfs.2.0.compute-0.iryjby's ganesha conf is defaulting to empty
Dec 11 09:16:45 compute-0 ceph-mon[74426]: Deploying daemon nfs.cephfs.2.0.compute-0.iryjby on compute-0
Dec 11 09:16:45 compute-0 systemd[1]: Reloading.
Dec 11 09:16:45 compute-0 systemd-sysv-generator[94718]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 11 09:16:45 compute-0 systemd-rc-local-generator[94714]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 11 09:16:45 compute-0 systemd[1]: Starting Ceph nfs.cephfs.2.0.compute-0.iryjby for 31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060...
Dec 11 09:16:45 compute-0 podman[94767]: 2025-12-11 09:16:45.953434143 +0000 UTC m=+0.048605112 container create b054262215ee01c3adca2deaedf76efed99f7a0a45370b1e29ea68ac174c96fc (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-2-0-compute-0-iryjby, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 11 09:16:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/83fb39222185cbf52b921995d3fb5bac1aede6e75c34ab89f21cb7e956cf72ee/merged/etc/ganesha supports timestamps until 2038 (0x7fffffff)
Dec 11 09:16:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/83fb39222185cbf52b921995d3fb5bac1aede6e75c34ab89f21cb7e956cf72ee/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 11 09:16:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/83fb39222185cbf52b921995d3fb5bac1aede6e75c34ab89f21cb7e956cf72ee/merged/etc/ceph/keyring supports timestamps until 2038 (0x7fffffff)
Dec 11 09:16:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/83fb39222185cbf52b921995d3fb5bac1aede6e75c34ab89f21cb7e956cf72ee/merged/var/lib/ceph/radosgw/ceph-nfs.cephfs.2.0.compute-0.iryjby-rgw/keyring supports timestamps until 2038 (0x7fffffff)
Dec 11 09:16:46 compute-0 podman[94767]: 2025-12-11 09:16:46.019535623 +0000 UTC m=+0.114706602 container init b054262215ee01c3adca2deaedf76efed99f7a0a45370b1e29ea68ac174c96fc (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-2-0-compute-0-iryjby, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Dec 11 09:16:46 compute-0 podman[94767]: 2025-12-11 09:16:46.025737366 +0000 UTC m=+0.120908335 container start b054262215ee01c3adca2deaedf76efed99f7a0a45370b1e29ea68ac174c96fc (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-2-0-compute-0-iryjby, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 11 09:16:46 compute-0 podman[94767]: 2025-12-11 09:16:45.933071075 +0000 UTC m=+0.028242074 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec 11 09:16:46 compute-0 bash[94767]: b054262215ee01c3adca2deaedf76efed99f7a0a45370b1e29ea68ac174c96fc
Dec 11 09:16:46 compute-0 systemd[1]: Started Ceph nfs.cephfs.2.0.compute-0.iryjby for 31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060.
Dec 11 09:16:46 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-2-0-compute-0-iryjby[94782]: 11/12/2025 09:16:46 : epoch 693a8bfe : compute-0 : ganesha.nfsd-2[main] init_logging :LOG :NULL :LOG: Setting log level for all components to NIV_EVENT
Dec 11 09:16:46 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-2-0-compute-0-iryjby[94782]: 11/12/2025 09:16:46 : epoch 693a8bfe : compute-0 : ganesha.nfsd-2[main] main :MAIN :EVENT :ganesha.nfsd Starting: Ganesha Version 5.9
Dec 11 09:16:46 compute-0 sudo[94501]: pam_unix(sudo:session): session closed for user root
Dec 11 09:16:46 compute-0 ceph-mon[74426]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec 11 09:16:46 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-2-0-compute-0-iryjby[94782]: 11/12/2025 09:16:46 : epoch 693a8bfe : compute-0 : ganesha.nfsd-2[main] nfs_set_param_from_conf :NFS STARTUP :EVENT :Configuration file successfully parsed
Dec 11 09:16:46 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-2-0-compute-0-iryjby[94782]: 11/12/2025 09:16:46 : epoch 693a8bfe : compute-0 : ganesha.nfsd-2[main] monitoring_init :NFS STARTUP :EVENT :Init monitoring at 0.0.0.0:9587
Dec 11 09:16:46 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-2-0-compute-0-iryjby[94782]: 11/12/2025 09:16:46 : epoch 693a8bfe : compute-0 : ganesha.nfsd-2[main] fsal_init_fds_limit :MDCACHE LRU :EVENT :Setting the system-imposed limit on FDs to 1048576.
Dec 11 09:16:46 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-2-0-compute-0-iryjby[94782]: 11/12/2025 09:16:46 : epoch 693a8bfe : compute-0 : ganesha.nfsd-2[main] init_server_pkgs :NFS STARTUP :EVENT :Initializing ID Mapper.
Dec 11 09:16:46 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-2-0-compute-0-iryjby[94782]: 11/12/2025 09:16:46 : epoch 693a8bfe : compute-0 : ganesha.nfsd-2[main] init_server_pkgs :NFS STARTUP :EVENT :ID Mapper successfully initialized.
Dec 11 09:16:46 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-2-0-compute-0-iryjby[94782]: 11/12/2025 09:16:46 : epoch 693a8bfe : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Dec 11 09:16:46 compute-0 ceph-mon[74426]: log_channel(audit) log [INF] : from='mgr.14400 192.168.122.100:0/1063598679' entity='mgr.compute-0.wwpcae' 
Dec 11 09:16:46 compute-0 ceph-mon[74426]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec 11 09:16:46 compute-0 ceph-mon[74426]: log_channel(audit) log [INF] : from='mgr.14400 192.168.122.100:0/1063598679' entity='mgr.compute-0.wwpcae' 
Dec 11 09:16:46 compute-0 ceph-mon[74426]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Dec 11 09:16:46 compute-0 ceph-mon[74426]: log_channel(audit) log [INF] : from='mgr.14400 192.168.122.100:0/1063598679' entity='mgr.compute-0.wwpcae' 
Dec 11 09:16:46 compute-0 ceph-mgr[74715]: [progress INFO root] complete: finished ev 312a98b4-d515-4188-b295-c2554b8b2982 (Updating nfs.cephfs deployment (+3 -> 3))
Dec 11 09:16:46 compute-0 ceph-mgr[74715]: [progress INFO root] Completed event 312a98b4-d515-4188-b295-c2554b8b2982 (Updating nfs.cephfs deployment (+3 -> 3)) in 13 seconds
Dec 11 09:16:46 compute-0 ceph-mon[74426]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Dec 11 09:16:46 compute-0 ceph-mon[74426]: log_channel(audit) log [INF] : from='mgr.14400 192.168.122.100:0/1063598679' entity='mgr.compute-0.wwpcae' 
Dec 11 09:16:46 compute-0 ceph-mgr[74715]: [progress INFO root] update: starting ev b599dccd-0ddb-4fbf-81b9-571c51e9910e (Updating ingress.nfs.cephfs deployment (+6 -> 6))
Dec 11 09:16:46 compute-0 ceph-mon[74426]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/ingress.nfs.cephfs/monitor_password}] v 0)
Dec 11 09:16:46 compute-0 ceph-mon[74426]: log_channel(audit) log [INF] : from='mgr.14400 192.168.122.100:0/1063598679' entity='mgr.compute-0.wwpcae' 
Dec 11 09:16:46 compute-0 ceph-mgr[74715]: [cephadm INFO cephadm.serve] Deploying daemon haproxy.nfs.cephfs.compute-1.aifiay on compute-1
Dec 11 09:16:46 compute-0 ceph-mgr[74715]: log_channel(cephadm) log [INF] : Deploying daemon haproxy.nfs.cephfs.compute-1.aifiay on compute-1
Dec 11 09:16:46 compute-0 ceph-mon[74426]: pgmap v40: 198 pgs: 198 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 126 KiB/s rd, 5.5 KiB/s wr, 232 op/s
Dec 11 09:16:46 compute-0 ceph-mon[74426]: from='mgr.14400 192.168.122.100:0/1063598679' entity='mgr.compute-0.wwpcae' 
Dec 11 09:16:46 compute-0 ceph-mon[74426]: from='mgr.14400 192.168.122.100:0/1063598679' entity='mgr.compute-0.wwpcae' 
Dec 11 09:16:46 compute-0 ceph-mon[74426]: from='mgr.14400 192.168.122.100:0/1063598679' entity='mgr.compute-0.wwpcae' 
Dec 11 09:16:46 compute-0 ceph-mon[74426]: from='mgr.14400 192.168.122.100:0/1063598679' entity='mgr.compute-0.wwpcae' 
Dec 11 09:16:46 compute-0 ceph-mon[74426]: from='mgr.14400 192.168.122.100:0/1063598679' entity='mgr.compute-0.wwpcae' 
Dec 11 09:16:46 compute-0 ceph-mgr[74715]: log_channel(cluster) log [DBG] : pgmap v41: 198 pgs: 198 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 87 KiB/s rd, 1.5 KiB/s wr, 151 op/s
Dec 11 09:16:47 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-2-0-compute-0-iryjby[94782]: 11/12/2025 09:16:47 : epoch 693a8bfe : compute-0 : ganesha.nfsd-2[main] rados_kv_traverse :CLIENT ID :EVENT :Failed to lst kv ret=-2
Dec 11 09:16:47 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-2-0-compute-0-iryjby[94782]: 11/12/2025 09:16:47 : epoch 693a8bfe : compute-0 : ganesha.nfsd-2[main] rados_cluster_read_clids :CLIENT ID :EVENT :Failed to traverse recovery db: -2
Dec 11 09:16:47 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-2-0-compute-0-iryjby[94782]: 11/12/2025 09:16:47 : epoch 693a8bfe : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Dec 11 09:16:47 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-2-0-compute-0-iryjby[94782]: 11/12/2025 09:16:47 : epoch 693a8bfe : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Dec 11 09:16:47 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-2-0-compute-0-iryjby[94782]: 11/12/2025 09:16:47 : epoch 693a8bfe : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Dec 11 09:16:47 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-2-0-compute-0-iryjby[94782]: 11/12/2025 09:16:47 : epoch 693a8bfe : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Dec 11 09:16:47 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-2-0-compute-0-iryjby[94782]: 11/12/2025 09:16:47 : epoch 693a8bfe : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Dec 11 09:16:47 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-2-0-compute-0-iryjby[94782]: 11/12/2025 09:16:47 : epoch 693a8bfe : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Dec 11 09:16:47 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-2-0-compute-0-iryjby[94782]: 11/12/2025 09:16:47 : epoch 693a8bfe : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Dec 11 09:16:47 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-2-0-compute-0-iryjby[94782]: 11/12/2025 09:16:47 : epoch 693a8bfe : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Dec 11 09:16:47 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-2-0-compute-0-iryjby[94782]: 11/12/2025 09:16:47 : epoch 693a8bfe : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Dec 11 09:16:47 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-2-0-compute-0-iryjby[94782]: 11/12/2025 09:16:47 : epoch 693a8bfe : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Dec 11 09:16:47 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-2-0-compute-0-iryjby[94782]: 11/12/2025 09:16:47 : epoch 693a8bfe : compute-0 : ganesha.nfsd-2[main] rados_cluster_end_grace :CLIENT ID :EVENT :Failed to remove rec-0000000000000003:nfs.cephfs.2: -2
Dec 11 09:16:47 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-2-0-compute-0-iryjby[94782]: 11/12/2025 09:16:47 : epoch 693a8bfe : compute-0 : ganesha.nfsd-2[main] nfs_lift_grace_locked :STATE :EVENT :NFS Server Now NOT IN GRACE
Dec 11 09:16:47 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-2-0-compute-0-iryjby[94782]: 11/12/2025 09:16:47 : epoch 693a8bfe : compute-0 : ganesha.nfsd-2[main] main :NFS STARTUP :WARN :No export entries found in configuration file !!!
Dec 11 09:16:47 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-2-0-compute-0-iryjby[94782]: 11/12/2025 09:16:47 : epoch 693a8bfe : compute-0 : ganesha.nfsd-2[main] config_errs_to_log :CONFIG :WARN :Config File (/etc/ganesha/ganesha.conf:24): Unknown block (RADOS_URLS)
Dec 11 09:16:47 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-2-0-compute-0-iryjby[94782]: 11/12/2025 09:16:47 : epoch 693a8bfe : compute-0 : ganesha.nfsd-2[main] config_errs_to_log :CONFIG :WARN :Config File (/etc/ganesha/ganesha.conf:29): Unknown block (RGW)
Dec 11 09:16:47 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-2-0-compute-0-iryjby[94782]: 11/12/2025 09:16:47 : epoch 693a8bfe : compute-0 : ganesha.nfsd-2[main] lower_my_caps :NFS STARTUP :EVENT :CAP_SYS_RESOURCE was successfully removed for proper quota management in FSAL
Dec 11 09:16:47 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-2-0-compute-0-iryjby[94782]: 11/12/2025 09:16:47 : epoch 693a8bfe : compute-0 : ganesha.nfsd-2[main] lower_my_caps :NFS STARTUP :EVENT :currently set capabilities are: cap_chown,cap_dac_override,cap_fowner,cap_fsetid,cap_kill,cap_setgid,cap_setuid,cap_setpcap,cap_net_bind_service,cap_sys_chroot,cap_setfcap=ep
Dec 11 09:16:47 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-2-0-compute-0-iryjby[94782]: 11/12/2025 09:16:47 : epoch 693a8bfe : compute-0 : ganesha.nfsd-2[main] gsh_dbus_pkginit :DBUS :CRIT :dbus_bus_get failed (Failed to connect to socket /run/dbus/system_bus_socket: No such file or directory)
Dec 11 09:16:47 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-2-0-compute-0-iryjby[94782]: 11/12/2025 09:16:47 : epoch 693a8bfe : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Dec 11 09:16:47 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-2-0-compute-0-iryjby[94782]: 11/12/2025 09:16:47 : epoch 693a8bfe : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Dec 11 09:16:47 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-2-0-compute-0-iryjby[94782]: 11/12/2025 09:16:47 : epoch 693a8bfe : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Dec 11 09:16:47 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-2-0-compute-0-iryjby[94782]: 11/12/2025 09:16:47 : epoch 693a8bfe : compute-0 : ganesha.nfsd-2[main] nfs_Init_svc :DISP :CRIT :Cannot acquire credentials for principal nfs
Dec 11 09:16:47 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-2-0-compute-0-iryjby[94782]: 11/12/2025 09:16:47 : epoch 693a8bfe : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Dec 11 09:16:47 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-2-0-compute-0-iryjby[94782]: 11/12/2025 09:16:47 : epoch 693a8bfe : compute-0 : ganesha.nfsd-2[main] nfs_Init_admin_thread :NFS CB :EVENT :Admin thread initialized
Dec 11 09:16:47 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-2-0-compute-0-iryjby[94782]: 11/12/2025 09:16:47 : epoch 693a8bfe : compute-0 : ganesha.nfsd-2[main] nfs_rpc_cb_init_ccache :NFS STARTUP :EVENT :Callback creds directory (/var/run/ganesha) already exists
Dec 11 09:16:47 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-2-0-compute-0-iryjby[94782]: 11/12/2025 09:16:47 : epoch 693a8bfe : compute-0 : ganesha.nfsd-2[main] find_keytab_entry :NFS CB :WARN :Configuration file does not specify default realm while getting default realm name
Dec 11 09:16:47 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-2-0-compute-0-iryjby[94782]: 11/12/2025 09:16:47 : epoch 693a8bfe : compute-0 : ganesha.nfsd-2[main] gssd_refresh_krb5_machine_credential :NFS CB :CRIT :ERROR: gssd_refresh_krb5_machine_credential: no usable keytab entry found in keytab /etc/krb5.keytab for connection with host localhost
Dec 11 09:16:47 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-2-0-compute-0-iryjby[94782]: 11/12/2025 09:16:47 : epoch 693a8bfe : compute-0 : ganesha.nfsd-2[main] nfs_rpc_cb_init_ccache :NFS STARTUP :WARN :gssd_refresh_krb5_machine_credential failed (-1765328160:0)
Dec 11 09:16:47 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-2-0-compute-0-iryjby[94782]: 11/12/2025 09:16:47 : epoch 693a8bfe : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :Starting delayed executor.
Dec 11 09:16:47 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-2-0-compute-0-iryjby[94782]: 11/12/2025 09:16:47 : epoch 693a8bfe : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :gsh_dbusthread was started successfully
Dec 11 09:16:47 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-2-0-compute-0-iryjby[94782]: 11/12/2025 09:16:47 : epoch 693a8bfe : compute-0 : ganesha.nfsd-2[dbus] gsh_dbus_thread :DBUS :CRIT :DBUS not initialized, service thread exiting
Dec 11 09:16:47 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-2-0-compute-0-iryjby[94782]: 11/12/2025 09:16:47 : epoch 693a8bfe : compute-0 : ganesha.nfsd-2[dbus] gsh_dbus_thread :DBUS :EVENT :shutdown
Dec 11 09:16:47 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-2-0-compute-0-iryjby[94782]: 11/12/2025 09:16:47 : epoch 693a8bfe : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :admin thread was started successfully
Dec 11 09:16:47 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-2-0-compute-0-iryjby[94782]: 11/12/2025 09:16:47 : epoch 693a8bfe : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :reaper thread was started successfully
Dec 11 09:16:47 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-2-0-compute-0-iryjby[94782]: 11/12/2025 09:16:47 : epoch 693a8bfe : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :General fridge was started successfully
Dec 11 09:16:47 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-2-0-compute-0-iryjby[94782]: 11/12/2025 09:16:47 : epoch 693a8bfe : compute-0 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :-------------------------------------------------
Dec 11 09:16:47 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-2-0-compute-0-iryjby[94782]: 11/12/2025 09:16:47 : epoch 693a8bfe : compute-0 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :             NFS SERVER INITIALIZED
Dec 11 09:16:47 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-2-0-compute-0-iryjby[94782]: 11/12/2025 09:16:47 : epoch 693a8bfe : compute-0 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :-------------------------------------------------
Dec 11 09:16:47 compute-0 ceph-mgr[74715]: [progress INFO root] Writing back 18 completed events
Dec 11 09:16:47 compute-0 ceph-mon[74426]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0)
Dec 11 09:16:47 compute-0 ceph-mon[74426]: log_channel(audit) log [INF] : from='mgr.14400 192.168.122.100:0/1063598679' entity='mgr.compute-0.wwpcae' 
Dec 11 09:16:47 compute-0 ceph-mon[74426]: Deploying daemon haproxy.nfs.cephfs.compute-1.aifiay on compute-1
Dec 11 09:16:47 compute-0 ceph-mon[74426]: from='mgr.14400 192.168.122.100:0/1063598679' entity='mgr.compute-0.wwpcae' 
Dec 11 09:16:48 compute-0 ceph-mon[74426]: pgmap v41: 198 pgs: 198 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 87 KiB/s rd, 1.5 KiB/s wr, 151 op/s
Dec 11 09:16:48 compute-0 ceph-mgr[74715]: log_channel(cluster) log [DBG] : pgmap v42: 198 pgs: 198 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 91 KiB/s rd, 2.6 KiB/s wr, 155 op/s
Dec 11 09:16:49 compute-0 ceph-mon[74426]: mon.compute-0@0(leader).osd e56 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 11 09:16:50 compute-0 ceph-mon[74426]: pgmap v42: 198 pgs: 198 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 91 KiB/s rd, 2.6 KiB/s wr, 155 op/s
Dec 11 09:16:50 compute-0 ceph-mgr[74715]: log_channel(cluster) log [DBG] : pgmap v43: 198 pgs: 198 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 4.8 KiB/s rd, 1.8 KiB/s wr, 7 op/s
Dec 11 09:16:51 compute-0 ceph-mon[74426]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Dec 11 09:16:51 compute-0 ceph-mon[74426]: log_channel(audit) log [INF] : from='mgr.14400 192.168.122.100:0/1063598679' entity='mgr.compute-0.wwpcae' 
Dec 11 09:16:51 compute-0 ceph-mon[74426]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Dec 11 09:16:51 compute-0 ceph-mon[74426]: log_channel(audit) log [INF] : from='mgr.14400 192.168.122.100:0/1063598679' entity='mgr.compute-0.wwpcae' 
Dec 11 09:16:51 compute-0 ceph-mon[74426]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.ingress.nfs.cephfs}] v 0)
Dec 11 09:16:51 compute-0 ceph-mon[74426]: log_channel(audit) log [INF] : from='mgr.14400 192.168.122.100:0/1063598679' entity='mgr.compute-0.wwpcae' 
Dec 11 09:16:51 compute-0 ceph-mgr[74715]: [cephadm INFO cephadm.serve] Deploying daemon haproxy.nfs.cephfs.compute-0.qtoxfz on compute-0
Dec 11 09:16:51 compute-0 ceph-mgr[74715]: log_channel(cephadm) log [INF] : Deploying daemon haproxy.nfs.cephfs.compute-0.qtoxfz on compute-0
Dec 11 09:16:51 compute-0 sudo[94836]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 11 09:16:51 compute-0 sudo[94836]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 11 09:16:51 compute-0 sudo[94836]: pam_unix(sudo:session): session closed for user root
Dec 11 09:16:51 compute-0 sudo[94861]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/haproxy:2.3 --timeout 895 _orch deploy --fsid 31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060
Dec 11 09:16:51 compute-0 sudo[94861]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 11 09:16:52 compute-0 ceph-mon[74426]: pgmap v43: 198 pgs: 198 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 4.8 KiB/s rd, 1.8 KiB/s wr, 7 op/s
Dec 11 09:16:52 compute-0 ceph-mon[74426]: from='mgr.14400 192.168.122.100:0/1063598679' entity='mgr.compute-0.wwpcae' 
Dec 11 09:16:52 compute-0 ceph-mon[74426]: from='mgr.14400 192.168.122.100:0/1063598679' entity='mgr.compute-0.wwpcae' 
Dec 11 09:16:52 compute-0 ceph-mon[74426]: from='mgr.14400 192.168.122.100:0/1063598679' entity='mgr.compute-0.wwpcae' 
Dec 11 09:16:52 compute-0 ceph-mon[74426]: Deploying daemon haproxy.nfs.cephfs.compute-0.qtoxfz on compute-0
Dec 11 09:16:52 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-2-0-compute-0-iryjby[94782]: 11/12/2025 09:16:52 : epoch 693a8bfe : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fef7c000df0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Dec 11 09:16:52 compute-0 ceph-mgr[74715]: log_channel(cluster) log [DBG] : pgmap v44: 198 pgs: 198 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 4.8 KiB/s rd, 1.8 KiB/s wr, 7 op/s
Dec 11 09:16:54 compute-0 ceph-mon[74426]: mon.compute-0@0(leader).osd e56 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 11 09:16:54 compute-0 ceph-mon[74426]: pgmap v44: 198 pgs: 198 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 4.8 KiB/s rd, 1.8 KiB/s wr, 7 op/s
Dec 11 09:16:54 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-2-0-compute-0-iryjby[94782]: 11/12/2025 09:16:54 : epoch 693a8bfe : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fef60000b60 fd 37 proxy header rest len failed header rlen = % (will set dead)
Dec 11 09:16:54 compute-0 ceph-mgr[74715]: log_channel(cluster) log [DBG] : pgmap v45: 198 pgs: 198 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 4.8 KiB/s rd, 1.8 KiB/s wr, 7 op/s
Dec 11 09:16:54 compute-0 sudo[95005]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-azdckovggbbxradgfwspxfbjhxljqjkj ; /usr/bin/python3'
Dec 11 09:16:54 compute-0 sudo[95005]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 11 09:16:55 compute-0 python3[95007]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint radosgw-admin quay.io/ceph/ceph:v19 --fsid 31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch ps --daemon-type rgw --format json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 11 09:16:56 compute-0 ceph-mon[74426]: pgmap v45: 198 pgs: 198 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 4.8 KiB/s rd, 1.8 KiB/s wr, 7 op/s
Dec 11 09:16:56 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-2-0-compute-0-iryjby[94782]: 11/12/2025 09:16:56 : epoch 693a8bfe : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fef60000b60 fd 37 proxy header rest len failed header rlen = % (will set dead)
Dec 11 09:16:56 compute-0 podman[95013]: 2025-12-11 09:16:56.633348992 +0000 UTC m=+1.449578900 container create 58e344480a95be51de695ccfa83c109f2167c77f96a28782e26a48ccb6c5bab7 (image=quay.io/ceph/ceph:v19, name=zealous_lehmann, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 11 09:16:56 compute-0 podman[95013]: 2025-12-11 09:16:56.612842716 +0000 UTC m=+1.429072654 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Dec 11 09:16:56 compute-0 podman[94927]: 2025-12-11 09:16:56.654036244 +0000 UTC m=+4.531951548 image pull e85424b0d443f37ddd2dd8a3bb2ef6f18dd352b987723a921b64289023af2914 quay.io/ceph/haproxy:2.3
Dec 11 09:16:56 compute-0 podman[94927]: 2025-12-11 09:16:56.673381505 +0000 UTC m=+4.551296789 container create 01cf2904d43f1b0b57f7ce01c697bde63c99da32e6b22ab4f8280c0c9e72b003 (image=quay.io/ceph/haproxy:2.3, name=suspicious_shockley)
Dec 11 09:16:56 compute-0 systemd[1]: Started libpod-conmon-58e344480a95be51de695ccfa83c109f2167c77f96a28782e26a48ccb6c5bab7.scope.
Dec 11 09:16:56 compute-0 systemd[1]: Started libcrun container.
Dec 11 09:16:56 compute-0 systemd[1]: Started libpod-conmon-01cf2904d43f1b0b57f7ce01c697bde63c99da32e6b22ab4f8280c0c9e72b003.scope.
Dec 11 09:16:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a3e23057cee27feba1861bdd058393e4944e945a2edb541745b617d6c755309e/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 11 09:16:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a3e23057cee27feba1861bdd058393e4944e945a2edb541745b617d6c755309e/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec 11 09:16:56 compute-0 podman[95013]: 2025-12-11 09:16:56.723701197 +0000 UTC m=+1.539931125 container init 58e344480a95be51de695ccfa83c109f2167c77f96a28782e26a48ccb6c5bab7 (image=quay.io/ceph/ceph:v19, name=zealous_lehmann, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec 11 09:16:56 compute-0 systemd[1]: Started libcrun container.
Dec 11 09:16:56 compute-0 podman[95013]: 2025-12-11 09:16:56.731774697 +0000 UTC m=+1.548004615 container start 58e344480a95be51de695ccfa83c109f2167c77f96a28782e26a48ccb6c5bab7 (image=quay.io/ceph/ceph:v19, name=zealous_lehmann, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 11 09:16:56 compute-0 podman[95013]: 2025-12-11 09:16:56.735550755 +0000 UTC m=+1.551780663 container attach 58e344480a95be51de695ccfa83c109f2167c77f96a28782e26a48ccb6c5bab7 (image=quay.io/ceph/ceph:v19, name=zealous_lehmann, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec 11 09:16:56 compute-0 podman[94927]: 2025-12-11 09:16:56.740673704 +0000 UTC m=+4.618589008 container init 01cf2904d43f1b0b57f7ce01c697bde63c99da32e6b22ab4f8280c0c9e72b003 (image=quay.io/ceph/haproxy:2.3, name=suspicious_shockley)
Dec 11 09:16:56 compute-0 podman[94927]: 2025-12-11 09:16:56.747412113 +0000 UTC m=+4.625327387 container start 01cf2904d43f1b0b57f7ce01c697bde63c99da32e6b22ab4f8280c0c9e72b003 (image=quay.io/ceph/haproxy:2.3, name=suspicious_shockley)
Dec 11 09:16:56 compute-0 podman[94927]: 2025-12-11 09:16:56.751730247 +0000 UTC m=+4.629645551 container attach 01cf2904d43f1b0b57f7ce01c697bde63c99da32e6b22ab4f8280c0c9e72b003 (image=quay.io/ceph/haproxy:2.3, name=suspicious_shockley)
Dec 11 09:16:56 compute-0 suspicious_shockley[95086]: 0 0
Dec 11 09:16:56 compute-0 systemd[1]: libpod-01cf2904d43f1b0b57f7ce01c697bde63c99da32e6b22ab4f8280c0c9e72b003.scope: Deactivated successfully.
Dec 11 09:16:56 compute-0 zealous_lehmann[95082]: ERROR: invalid flag --daemon-type
Dec 11 09:16:56 compute-0 systemd[1]: libpod-58e344480a95be51de695ccfa83c109f2167c77f96a28782e26a48ccb6c5bab7.scope: Deactivated successfully.
Dec 11 09:16:56 compute-0 podman[95013]: 2025-12-11 09:16:56.796382923 +0000 UTC m=+1.612612841 container died 58e344480a95be51de695ccfa83c109f2167c77f96a28782e26a48ccb6c5bab7 (image=quay.io/ceph/ceph:v19, name=zealous_lehmann, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Dec 11 09:16:56 compute-0 podman[95092]: 2025-12-11 09:16:56.818579682 +0000 UTC m=+0.048191567 container died 01cf2904d43f1b0b57f7ce01c697bde63c99da32e6b22ab4f8280c0c9e72b003 (image=quay.io/ceph/haproxy:2.3, name=suspicious_shockley)
Dec 11 09:16:56 compute-0 systemd[1]: var-lib-containers-storage-overlay-a3e23057cee27feba1861bdd058393e4944e945a2edb541745b617d6c755309e-merged.mount: Deactivated successfully.
Dec 11 09:16:56 compute-0 podman[95013]: 2025-12-11 09:16:56.851435842 +0000 UTC m=+1.667665750 container remove 58e344480a95be51de695ccfa83c109f2167c77f96a28782e26a48ccb6c5bab7 (image=quay.io/ceph/ceph:v19, name=zealous_lehmann, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, OSD_FLAVOR=default)
Dec 11 09:16:56 compute-0 systemd[1]: libpod-conmon-58e344480a95be51de695ccfa83c109f2167c77f96a28782e26a48ccb6c5bab7.scope: Deactivated successfully.
Dec 11 09:16:56 compute-0 systemd[1]: var-lib-containers-storage-overlay-bd4ee25a4e6f63c72cd358b8fb02e0508fd631aedddf67cf8c3406971a50da22-merged.mount: Deactivated successfully.
Dec 11 09:16:56 compute-0 sudo[95005]: pam_unix(sudo:session): session closed for user root
Dec 11 09:16:56 compute-0 podman[95092]: 2025-12-11 09:16:56.885681545 +0000 UTC m=+0.115293410 container remove 01cf2904d43f1b0b57f7ce01c697bde63c99da32e6b22ab4f8280c0c9e72b003 (image=quay.io/ceph/haproxy:2.3, name=suspicious_shockley)
Dec 11 09:16:56 compute-0 systemd[1]: libpod-conmon-01cf2904d43f1b0b57f7ce01c697bde63c99da32e6b22ab4f8280c0c9e72b003.scope: Deactivated successfully.
Dec 11 09:16:56 compute-0 ceph-mgr[74715]: [balancer INFO root] Optimize plan auto_2025-12-11_09:16:56
Dec 11 09:16:56 compute-0 ceph-mgr[74715]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec 11 09:16:56 compute-0 ceph-mgr[74715]: [balancer INFO root] do_upmap
Dec 11 09:16:56 compute-0 ceph-mgr[74715]: [balancer INFO root] pools ['.mgr', 'default.rgw.meta', 'backups', 'volumes', 'cephfs.cephfs.meta', 'default.rgw.control', 'images', 'cephfs.cephfs.data', 'vms', '.nfs', 'default.rgw.log', '.rgw.root']
Dec 11 09:16:56 compute-0 ceph-mgr[74715]: [balancer INFO root] prepared 0/10 upmap changes
Dec 11 09:16:56 compute-0 ceph-mgr[74715]: log_channel(cluster) log [DBG] : pgmap v46: 198 pgs: 198 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 3.4 KiB/s rd, 1.1 KiB/s wr, 4 op/s
Dec 11 09:16:56 compute-0 systemd[1]: Reloading.
Dec 11 09:16:57 compute-0 ceph-mgr[74715]: [pg_autoscaler INFO root] _maybe_adjust
Dec 11 09:16:57 compute-0 ceph-mgr[74715]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 11 09:16:57 compute-0 ceph-mgr[74715]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec 11 09:16:57 compute-0 ceph-mgr[74715]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 11 09:16:57 compute-0 ceph-mgr[74715]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 11 09:16:57 compute-0 ceph-mgr[74715]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 11 09:16:57 compute-0 ceph-mgr[74715]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 11 09:16:57 compute-0 ceph-mgr[74715]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 11 09:16:57 compute-0 ceph-mgr[74715]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 11 09:16:57 compute-0 ceph-mgr[74715]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 11 09:16:57 compute-0 ceph-mgr[74715]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 11 09:16:57 compute-0 ceph-mgr[74715]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 11 09:16:57 compute-0 ceph-mgr[74715]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Dec 11 09:16:57 compute-0 ceph-mgr[74715]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 11 09:16:57 compute-0 ceph-mgr[74715]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 11 09:16:57 compute-0 ceph-mgr[74715]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 11 09:16:57 compute-0 ceph-mgr[74715]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 1)
Dec 11 09:16:57 compute-0 ceph-mgr[74715]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 11 09:16:57 compute-0 ceph-mgr[74715]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 1)
Dec 11 09:16:57 compute-0 ceph-mgr[74715]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 11 09:16:57 compute-0 ceph-mgr[74715]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 1)
Dec 11 09:16:57 compute-0 ceph-mgr[74715]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 11 09:16:57 compute-0 ceph-mgr[74715]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 1)
Dec 11 09:16:57 compute-0 ceph-mgr[74715]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 11 09:16:57 compute-0 ceph-mgr[74715]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 0.0 of space, bias 4.0, pg target 0.0 quantized to 32 (current 1)
Dec 11 09:16:57 compute-0 ceph-mon[74426]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": ".nfs", "var": "pg_num", "val": "32"} v 0)
Dec 11 09:16:57 compute-0 ceph-mon[74426]: log_channel(audit) log [INF] : from='mgr.14400 192.168.122.100:0/1063598679' entity='mgr.compute-0.wwpcae' cmd=[{"prefix": "osd pool set", "pool": ".nfs", "var": "pg_num", "val": "32"}]: dispatch
Dec 11 09:16:57 compute-0 systemd-sysv-generator[95166]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 11 09:16:57 compute-0 systemd-rc-local-generator[95163]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 11 09:16:57 compute-0 ceph-mgr[74715]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec 11 09:16:57 compute-0 ceph-mgr[74715]: [volumes INFO mgr_util] scanning for idle connections..
Dec 11 09:16:57 compute-0 ceph-mgr[74715]: [volumes INFO mgr_util] cleaning up connections: []
Dec 11 09:16:57 compute-0 ceph-mgr[74715]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 11 09:16:57 compute-0 ceph-mgr[74715]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 11 09:16:57 compute-0 ceph-mgr[74715]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 11 09:16:57 compute-0 ceph-mgr[74715]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 11 09:16:57 compute-0 ceph-mgr[74715]: [volumes INFO mgr_util] scanning for idle connections..
Dec 11 09:16:57 compute-0 ceph-mgr[74715]: [volumes INFO mgr_util] cleaning up connections: []
Dec 11 09:16:57 compute-0 ceph-mgr[74715]: [volumes INFO mgr_util] scanning for idle connections..
Dec 11 09:16:57 compute-0 ceph-mgr[74715]: [volumes INFO mgr_util] cleaning up connections: []
Dec 11 09:16:57 compute-0 ceph-mgr[74715]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec 11 09:16:57 compute-0 ceph-mgr[74715]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 11 09:16:57 compute-0 ceph-mgr[74715]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 11 09:16:57 compute-0 ceph-mgr[74715]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 11 09:16:57 compute-0 ceph-mgr[74715]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 11 09:16:57 compute-0 systemd[1]: Reloading.
Dec 11 09:16:57 compute-0 systemd-rc-local-generator[95205]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 11 09:16:57 compute-0 systemd-sysv-generator[95208]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 11 09:16:57 compute-0 ceph-mon[74426]: mon.compute-0@0(leader).osd e56 do_prune osdmap full prune enabled
Dec 11 09:16:57 compute-0 ceph-mon[74426]: from='mgr.14400 192.168.122.100:0/1063598679' entity='mgr.compute-0.wwpcae' cmd=[{"prefix": "osd pool set", "pool": ".nfs", "var": "pg_num", "val": "32"}]: dispatch
Dec 11 09:16:57 compute-0 ceph-mon[74426]: log_channel(audit) log [INF] : from='mgr.14400 192.168.122.100:0/1063598679' entity='mgr.compute-0.wwpcae' cmd='[{"prefix": "osd pool set", "pool": ".nfs", "var": "pg_num", "val": "32"}]': finished
Dec 11 09:16:57 compute-0 ceph-mon[74426]: mon.compute-0@0(leader).osd e57 e57: 3 total, 3 up, 3 in
Dec 11 09:16:57 compute-0 ceph-mon[74426]: log_channel(cluster) log [DBG] : osdmap e57: 3 total, 3 up, 3 in
Dec 11 09:16:57 compute-0 ceph-mgr[74715]: [progress INFO root] update: starting ev 211b1a04-1da7-40f8-b10f-e46dc3fa5cae (PG autoscaler increasing pool 8 PGs from 1 to 32)
Dec 11 09:16:57 compute-0 ceph-mon[74426]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num", "val": "32"} v 0)
Dec 11 09:16:57 compute-0 ceph-mon[74426]: log_channel(audit) log [INF] : from='mgr.14400 192.168.122.100:0/1063598679' entity='mgr.compute-0.wwpcae' cmd=[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num", "val": "32"}]: dispatch
Dec 11 09:16:57 compute-0 systemd[1]: Starting Ceph haproxy.nfs.cephfs.compute-0.qtoxfz for 31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060...
Dec 11 09:16:57 compute-0 podman[95260]: 2025-12-11 09:16:57.914331718 +0000 UTC m=+0.048698154 container create 67bfd387bd504b410155709fb8f34d841ddb874b3571550320b1a27fbc3dea08 (image=quay.io/ceph/haproxy:2.3, name=ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-haproxy-nfs-cephfs-compute-0-qtoxfz)
Dec 11 09:16:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c854c67de2264e940e72b19357c71a7b449dedd6957b868aea18624aee0e3e7b/merged/var/lib/haproxy supports timestamps until 2038 (0x7fffffff)
Dec 11 09:16:57 compute-0 podman[95260]: 2025-12-11 09:16:57.972826813 +0000 UTC m=+0.107193239 container init 67bfd387bd504b410155709fb8f34d841ddb874b3571550320b1a27fbc3dea08 (image=quay.io/ceph/haproxy:2.3, name=ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-haproxy-nfs-cephfs-compute-0-qtoxfz)
Dec 11 09:16:57 compute-0 podman[95260]: 2025-12-11 09:16:57.98012914 +0000 UTC m=+0.114495566 container start 67bfd387bd504b410155709fb8f34d841ddb874b3571550320b1a27fbc3dea08 (image=quay.io/ceph/haproxy:2.3, name=ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-haproxy-nfs-cephfs-compute-0-qtoxfz)
Dec 11 09:16:57 compute-0 bash[95260]: 67bfd387bd504b410155709fb8f34d841ddb874b3571550320b1a27fbc3dea08
Dec 11 09:16:57 compute-0 podman[95260]: 2025-12-11 09:16:57.89156021 +0000 UTC m=+0.025926636 image pull e85424b0d443f37ddd2dd8a3bb2ef6f18dd352b987723a921b64289023af2914 quay.io/ceph/haproxy:2.3
Dec 11 09:16:57 compute-0 systemd[1]: Started Ceph haproxy.nfs.cephfs.compute-0.qtoxfz for 31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060.
Dec 11 09:16:57 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-haproxy-nfs-cephfs-compute-0-qtoxfz[95274]: [NOTICE] 344/091657 (2) : New worker #1 (4) forked
Dec 11 09:16:58 compute-0 sudo[94861]: pam_unix(sudo:session): session closed for user root
Dec 11 09:16:58 compute-0 ceph-mon[74426]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec 11 09:16:58 compute-0 ceph-mon[74426]: log_channel(audit) log [INF] : from='mgr.14400 192.168.122.100:0/1063598679' entity='mgr.compute-0.wwpcae' 
Dec 11 09:16:58 compute-0 ceph-mon[74426]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec 11 09:16:58 compute-0 ceph-mon[74426]: log_channel(audit) log [INF] : from='mgr.14400 192.168.122.100:0/1063598679' entity='mgr.compute-0.wwpcae' 
Dec 11 09:16:58 compute-0 ceph-mon[74426]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.ingress.nfs.cephfs}] v 0)
Dec 11 09:16:58 compute-0 ceph-mon[74426]: log_channel(audit) log [INF] : from='mgr.14400 192.168.122.100:0/1063598679' entity='mgr.compute-0.wwpcae' 
Dec 11 09:16:58 compute-0 ceph-mgr[74715]: [cephadm INFO cephadm.serve] Deploying daemon haproxy.nfs.cephfs.compute-2.sgybns on compute-2
Dec 11 09:16:58 compute-0 ceph-mgr[74715]: log_channel(cephadm) log [INF] : Deploying daemon haproxy.nfs.cephfs.compute-2.sgybns on compute-2
Dec 11 09:16:58 compute-0 ceph-mon[74426]: mon.compute-0@0(leader).osd e57 do_prune osdmap full prune enabled
Dec 11 09:16:58 compute-0 ceph-mon[74426]: log_channel(audit) log [INF] : from='mgr.14400 192.168.122.100:0/1063598679' entity='mgr.compute-0.wwpcae' cmd='[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num", "val": "32"}]': finished
Dec 11 09:16:58 compute-0 ceph-mon[74426]: mon.compute-0@0(leader).osd e58 e58: 3 total, 3 up, 3 in
Dec 11 09:16:58 compute-0 ceph-mon[74426]: log_channel(cluster) log [DBG] : osdmap e58: 3 total, 3 up, 3 in
Dec 11 09:16:58 compute-0 ceph-mgr[74715]: [progress INFO root] update: starting ev a30dd78d-479c-400b-acea-eb02500fb3cf (PG autoscaler increasing pool 9 PGs from 1 to 32)
Dec 11 09:16:58 compute-0 ceph-mon[74426]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num", "val": "32"} v 0)
Dec 11 09:16:58 compute-0 ceph-mon[74426]: log_channel(audit) log [INF] : from='mgr.14400 192.168.122.100:0/1063598679' entity='mgr.compute-0.wwpcae' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num", "val": "32"}]: dispatch
Dec 11 09:16:58 compute-0 ceph-mon[74426]: pgmap v46: 198 pgs: 198 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 3.4 KiB/s rd, 1.1 KiB/s wr, 4 op/s
Dec 11 09:16:58 compute-0 ceph-mon[74426]: from='mgr.14400 192.168.122.100:0/1063598679' entity='mgr.compute-0.wwpcae' cmd='[{"prefix": "osd pool set", "pool": ".nfs", "var": "pg_num", "val": "32"}]': finished
Dec 11 09:16:58 compute-0 ceph-mon[74426]: osdmap e57: 3 total, 3 up, 3 in
Dec 11 09:16:58 compute-0 ceph-mon[74426]: from='mgr.14400 192.168.122.100:0/1063598679' entity='mgr.compute-0.wwpcae' cmd=[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num", "val": "32"}]: dispatch
Dec 11 09:16:58 compute-0 ceph-mon[74426]: from='mgr.14400 192.168.122.100:0/1063598679' entity='mgr.compute-0.wwpcae' 
Dec 11 09:16:58 compute-0 ceph-mon[74426]: from='mgr.14400 192.168.122.100:0/1063598679' entity='mgr.compute-0.wwpcae' 
Dec 11 09:16:58 compute-0 ceph-mon[74426]: from='mgr.14400 192.168.122.100:0/1063598679' entity='mgr.compute-0.wwpcae' 
Dec 11 09:16:58 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-2-0-compute-0-iryjby[94782]: 11/12/2025 09:16:58 : epoch 693a8bfe : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fef68001c00 fd 37 proxy header rest len failed header rlen = % (will set dead)
Dec 11 09:16:58 compute-0 ceph-mgr[74715]: log_channel(cluster) log [DBG] : pgmap v49: 198 pgs: 198 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 383 B/s rd, 0 op/s
Dec 11 09:16:58 compute-0 ceph-mon[74426]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": ".nfs", "var": "pg_num_actual", "val": "32"} v 0)
Dec 11 09:16:58 compute-0 ceph-mon[74426]: log_channel(audit) log [INF] : from='mgr.14400 192.168.122.100:0/1063598679' entity='mgr.compute-0.wwpcae' cmd=[{"prefix": "osd pool set", "pool": ".nfs", "var": "pg_num_actual", "val": "32"}]: dispatch
Dec 11 09:16:58 compute-0 ceph-mon[74426]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num_actual", "val": "32"} v 0)
Dec 11 09:16:58 compute-0 ceph-mon[74426]: log_channel(audit) log [INF] : from='mgr.14400 192.168.122.100:0/1063598679' entity='mgr.compute-0.wwpcae' cmd=[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num_actual", "val": "32"}]: dispatch
Dec 11 09:16:59 compute-0 ceph-mon[74426]: mon.compute-0@0(leader).osd e58 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 11 09:16:59 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-2-0-compute-0-iryjby[94782]: 11/12/2025 09:16:59 : epoch 693a8bfe : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fef54000fa0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Dec 11 09:16:59 compute-0 ceph-mon[74426]: mon.compute-0@0(leader).osd e58 do_prune osdmap full prune enabled
Dec 11 09:16:59 compute-0 ceph-mon[74426]: log_channel(audit) log [INF] : from='mgr.14400 192.168.122.100:0/1063598679' entity='mgr.compute-0.wwpcae' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num", "val": "32"}]': finished
Dec 11 09:16:59 compute-0 ceph-mon[74426]: log_channel(audit) log [INF] : from='mgr.14400 192.168.122.100:0/1063598679' entity='mgr.compute-0.wwpcae' cmd='[{"prefix": "osd pool set", "pool": ".nfs", "var": "pg_num_actual", "val": "32"}]': finished
Dec 11 09:16:59 compute-0 ceph-mon[74426]: log_channel(audit) log [INF] : from='mgr.14400 192.168.122.100:0/1063598679' entity='mgr.compute-0.wwpcae' cmd='[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num_actual", "val": "32"}]': finished
Dec 11 09:16:59 compute-0 ceph-mon[74426]: mon.compute-0@0(leader).osd e59 e59: 3 total, 3 up, 3 in
Dec 11 09:16:59 compute-0 ceph-mon[74426]: log_channel(cluster) log [DBG] : osdmap e59: 3 total, 3 up, 3 in
Dec 11 09:16:59 compute-0 ceph-mgr[74715]: [progress INFO root] update: starting ev 2b3aa526-b9b5-464f-bc33-38d5472a74aa (PG autoscaler increasing pool 10 PGs from 1 to 32)
Dec 11 09:16:59 compute-0 ceph-mon[74426]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num", "val": "32"} v 0)
Dec 11 09:16:59 compute-0 ceph-mon[74426]: log_channel(audit) log [INF] : from='mgr.14400 192.168.122.100:0/1063598679' entity='mgr.compute-0.wwpcae' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num", "val": "32"}]: dispatch
Dec 11 09:16:59 compute-0 ceph-mon[74426]: Deploying daemon haproxy.nfs.cephfs.compute-2.sgybns on compute-2
Dec 11 09:16:59 compute-0 ceph-mon[74426]: from='mgr.14400 192.168.122.100:0/1063598679' entity='mgr.compute-0.wwpcae' cmd='[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num", "val": "32"}]': finished
Dec 11 09:16:59 compute-0 ceph-mon[74426]: osdmap e58: 3 total, 3 up, 3 in
Dec 11 09:16:59 compute-0 ceph-mon[74426]: from='mgr.14400 192.168.122.100:0/1063598679' entity='mgr.compute-0.wwpcae' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num", "val": "32"}]: dispatch
Dec 11 09:16:59 compute-0 ceph-mon[74426]: from='mgr.14400 192.168.122.100:0/1063598679' entity='mgr.compute-0.wwpcae' cmd=[{"prefix": "osd pool set", "pool": ".nfs", "var": "pg_num_actual", "val": "32"}]: dispatch
Dec 11 09:16:59 compute-0 ceph-mon[74426]: from='mgr.14400 192.168.122.100:0/1063598679' entity='mgr.compute-0.wwpcae' cmd=[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num_actual", "val": "32"}]: dispatch
Dec 11 09:16:59 compute-0 ceph-mon[74426]: from='mgr.14400 192.168.122.100:0/1063598679' entity='mgr.compute-0.wwpcae' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num", "val": "32"}]': finished
Dec 11 09:16:59 compute-0 ceph-mon[74426]: from='mgr.14400 192.168.122.100:0/1063598679' entity='mgr.compute-0.wwpcae' cmd='[{"prefix": "osd pool set", "pool": ".nfs", "var": "pg_num_actual", "val": "32"}]': finished
Dec 11 09:16:59 compute-0 ceph-mon[74426]: from='mgr.14400 192.168.122.100:0/1063598679' entity='mgr.compute-0.wwpcae' cmd='[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num_actual", "val": "32"}]': finished
Dec 11 09:16:59 compute-0 ceph-mon[74426]: osdmap e59: 3 total, 3 up, 3 in
Dec 11 09:17:00 compute-0 ceph-mon[74426]: mon.compute-0@0(leader).osd e59 do_prune osdmap full prune enabled
Dec 11 09:17:00 compute-0 ceph-mon[74426]: log_channel(audit) log [INF] : from='mgr.14400 192.168.122.100:0/1063598679' entity='mgr.compute-0.wwpcae' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num", "val": "32"}]': finished
Dec 11 09:17:00 compute-0 ceph-mon[74426]: mon.compute-0@0(leader).osd e60 e60: 3 total, 3 up, 3 in
Dec 11 09:17:00 compute-0 ceph-mon[74426]: log_channel(cluster) log [DBG] : osdmap e60: 3 total, 3 up, 3 in
Dec 11 09:17:00 compute-0 ceph-mgr[74715]: [progress INFO root] update: starting ev 437b4c6e-7cd7-4476-9187-5ce930e38fbf (PG autoscaler increasing pool 11 PGs from 1 to 32)
Dec 11 09:17:00 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 59 pg[9.0( v 49'12 (0'0,49'12] local-lis/les=48/49 n=6 ec=48/48 lis/c=48/48 les/c/f=49/49/0 sis=59 pruub=12.367469788s) [1] r=0 lpr=59 pi=[48,59)/1 crt=49'12 lcod 49'11 mlcod 49'11 active pruub 174.047653198s@ mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [1] -> [1], acting_primary 1 -> 1, up_primary 1 -> 1, role 0 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Dec 11 09:17:00 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 59 pg[8.0( v 56'45 (0'0,56'45] local-lis/les=45/46 n=5 ec=45/45 lis/c=45/45 les/c/f=46/46/0 sis=59 pruub=12.739258766s) [1] r=0 lpr=59 pi=[45,59)/1 crt=56'45 lcod 56'44 mlcod 56'44 active pruub 174.419555664s@ mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [1] -> [1], acting_primary 1 -> 1, up_primary 1 -> 1, role 0 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Dec 11 09:17:00 compute-0 ceph-mon[74426]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num", "val": "32"} v 0)
Dec 11 09:17:00 compute-0 ceph-mon[74426]: log_channel(audit) log [INF] : from='mgr.14400 192.168.122.100:0/1063598679' entity='mgr.compute-0.wwpcae' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num", "val": "32"}]: dispatch
Dec 11 09:17:00 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 60 pg[9.0( v 49'12 lc 0'0 (0'0,49'12] local-lis/les=48/49 n=0 ec=48/48 lis/c=48/48 les/c/f=49/49/0 sis=59 pruub=12.367469788s) [1] r=0 lpr=59 pi=[48,59)/1 crt=49'12 lcod 49'11 mlcod 0'0 unknown pruub 174.047653198s@ mbc={}] state<Start>: transitioning to Primary
Dec 11 09:17:00 compute-0 ceph-osd[82859]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x56366714e240) operator()   moving buffer(0x56366792a3e8 space 0x563667b83390 0x0~1000 clean)
Dec 11 09:17:00 compute-0 ceph-osd[82859]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x56366714e240) operator()   moving buffer(0x5636679068e8 space 0x563667a83ae0 0x0~1000 clean)
Dec 11 09:17:00 compute-0 ceph-osd[82859]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x56366714e240) operator()   moving buffer(0x563667b7fa68 space 0x563667908f80 0x0~1000 clean)
Dec 11 09:17:00 compute-0 ceph-osd[82859]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x56366714e240) operator()   moving buffer(0x563667bb79c8 space 0x563667ace900 0x0~1000 clean)
Dec 11 09:17:00 compute-0 ceph-mon[74426]: pgmap v49: 198 pgs: 198 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 383 B/s rd, 0 op/s
Dec 11 09:17:00 compute-0 ceph-mon[74426]: from='mgr.14400 192.168.122.100:0/1063598679' entity='mgr.compute-0.wwpcae' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num", "val": "32"}]: dispatch
Dec 11 09:17:00 compute-0 ceph-mon[74426]: from='mgr.14400 192.168.122.100:0/1063598679' entity='mgr.compute-0.wwpcae' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num", "val": "32"}]': finished
Dec 11 09:17:00 compute-0 ceph-mon[74426]: osdmap e60: 3 total, 3 up, 3 in
Dec 11 09:17:00 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 60 pg[8.0( v 56'45 lc 0'0 (0'0,56'45] local-lis/les=45/46 n=0 ec=45/45 lis/c=45/45 les/c/f=46/46/0 sis=59 pruub=12.739258766s) [1] r=0 lpr=59 pi=[45,59)/1 crt=56'45 lcod 56'44 mlcod 0'0 unknown pruub 174.419555664s@ mbc={}] state<Start>: transitioning to Primary
Dec 11 09:17:00 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 60 pg[9.1f( v 49'12 lc 0'0 (0'0,49'12] local-lis/les=48/49 n=0 ec=59/48 lis/c=48/48 les/c/f=49/49/0 sis=59) [1] r=0 lpr=59 pi=[48,59)/1 crt=49'12 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 11 09:17:00 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 60 pg[9.1e( v 49'12 lc 0'0 (0'0,49'12] local-lis/les=48/49 n=0 ec=59/48 lis/c=48/48 les/c/f=49/49/0 sis=59) [1] r=0 lpr=59 pi=[48,59)/1 crt=49'12 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 11 09:17:00 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 60 pg[9.2( v 49'12 lc 0'0 (0'0,49'12] local-lis/les=48/49 n=1 ec=59/48 lis/c=48/48 les/c/f=49/49/0 sis=59) [1] r=0 lpr=59 pi=[48,59)/1 crt=49'12 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 11 09:17:00 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 60 pg[9.15( v 49'12 lc 0'0 (0'0,49'12] local-lis/les=48/49 n=0 ec=59/48 lis/c=48/48 les/c/f=49/49/0 sis=59) [1] r=0 lpr=59 pi=[48,59)/1 crt=49'12 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 11 09:17:00 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 60 pg[9.7( v 49'12 lc 0'0 (0'0,49'12] local-lis/les=48/49 n=0 ec=59/48 lis/c=48/48 les/c/f=49/49/0 sis=59) [1] r=0 lpr=59 pi=[48,59)/1 crt=49'12 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 11 09:17:00 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 60 pg[9.1( v 49'12 (0'0,49'12] local-lis/les=48/49 n=1 ec=59/48 lis/c=48/48 les/c/f=49/49/0 sis=59) [1] r=0 lpr=59 pi=[48,59)/1 crt=49'12 lcod 0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 11 09:17:00 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 60 pg[9.18( v 49'12 lc 0'0 (0'0,49'12] local-lis/les=48/49 n=0 ec=59/48 lis/c=48/48 les/c/f=49/49/0 sis=59) [1] r=0 lpr=59 pi=[48,59)/1 crt=49'12 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 11 09:17:00 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 60 pg[9.5( v 49'12 lc 0'0 (0'0,49'12] local-lis/les=48/49 n=1 ec=59/48 lis/c=48/48 les/c/f=49/49/0 sis=59) [1] r=0 lpr=59 pi=[48,59)/1 crt=49'12 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 11 09:17:00 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 60 pg[9.b( v 49'12 lc 0'0 (0'0,49'12] local-lis/les=48/49 n=0 ec=59/48 lis/c=48/48 les/c/f=49/49/0 sis=59) [1] r=0 lpr=59 pi=[48,59)/1 crt=49'12 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 11 09:17:00 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 60 pg[9.f( v 49'12 lc 0'0 (0'0,49'12] local-lis/les=48/49 n=0 ec=59/48 lis/c=48/48 les/c/f=49/49/0 sis=59) [1] r=0 lpr=59 pi=[48,59)/1 crt=49'12 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 11 09:17:00 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 60 pg[9.d( v 49'12 lc 0'0 (0'0,49'12] local-lis/les=48/49 n=0 ec=59/48 lis/c=48/48 les/c/f=49/49/0 sis=59) [1] r=0 lpr=59 pi=[48,59)/1 crt=49'12 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 11 09:17:00 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 60 pg[9.1d( v 49'12 lc 0'0 (0'0,49'12] local-lis/les=48/49 n=0 ec=59/48 lis/c=48/48 les/c/f=49/49/0 sis=59) [1] r=0 lpr=59 pi=[48,59)/1 crt=49'12 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 11 09:17:00 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 60 pg[9.c( v 49'12 lc 0'0 (0'0,49'12] local-lis/les=48/49 n=0 ec=59/48 lis/c=48/48 les/c/f=49/49/0 sis=59) [1] r=0 lpr=59 pi=[48,59)/1 crt=49'12 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 11 09:17:00 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 60 pg[9.19( v 49'12 lc 0'0 (0'0,49'12] local-lis/les=48/49 n=0 ec=59/48 lis/c=48/48 les/c/f=49/49/0 sis=59) [1] r=0 lpr=59 pi=[48,59)/1 crt=49'12 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 11 09:17:00 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 60 pg[9.10( v 49'12 lc 0'0 (0'0,49'12] local-lis/les=48/49 n=0 ec=59/48 lis/c=48/48 les/c/f=49/49/0 sis=59) [1] r=0 lpr=59 pi=[48,59)/1 crt=49'12 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 11 09:17:00 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 60 pg[9.1b( v 49'12 lc 0'0 (0'0,49'12] local-lis/les=48/49 n=0 ec=59/48 lis/c=48/48 les/c/f=49/49/0 sis=59) [1] r=0 lpr=59 pi=[48,59)/1 crt=49'12 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 11 09:17:00 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 60 pg[9.1c( v 49'12 lc 0'0 (0'0,49'12] local-lis/les=48/49 n=0 ec=59/48 lis/c=48/48 les/c/f=49/49/0 sis=59) [1] r=0 lpr=59 pi=[48,59)/1 crt=49'12 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 11 09:17:00 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 60 pg[9.13( v 49'12 lc 0'0 (0'0,49'12] local-lis/les=48/49 n=0 ec=59/48 lis/c=48/48 les/c/f=49/49/0 sis=59) [1] r=0 lpr=59 pi=[48,59)/1 crt=49'12 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 11 09:17:00 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 60 pg[9.17( v 49'12 lc 0'0 (0'0,49'12] local-lis/les=48/49 n=0 ec=59/48 lis/c=48/48 les/c/f=49/49/0 sis=59) [1] r=0 lpr=59 pi=[48,59)/1 crt=49'12 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 11 09:17:00 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 60 pg[9.1a( v 49'12 lc 0'0 (0'0,49'12] local-lis/les=48/49 n=0 ec=59/48 lis/c=48/48 les/c/f=49/49/0 sis=59) [1] r=0 lpr=59 pi=[48,59)/1 crt=49'12 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 11 09:17:00 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 60 pg[9.3( v 49'12 lc 0'0 (0'0,49'12] local-lis/les=48/49 n=1 ec=59/48 lis/c=48/48 les/c/f=49/49/0 sis=59) [1] r=0 lpr=59 pi=[48,59)/1 crt=49'12 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 11 09:17:00 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 60 pg[9.12( v 49'12 lc 0'0 (0'0,49'12] local-lis/les=48/49 n=0 ec=59/48 lis/c=48/48 les/c/f=49/49/0 sis=59) [1] r=0 lpr=59 pi=[48,59)/1 crt=49'12 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 11 09:17:00 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 60 pg[9.14( v 49'12 lc 0'0 (0'0,49'12] local-lis/les=48/49 n=0 ec=59/48 lis/c=48/48 les/c/f=49/49/0 sis=59) [1] r=0 lpr=59 pi=[48,59)/1 crt=49'12 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 11 09:17:00 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 60 pg[9.4( v 49'12 lc 0'0 (0'0,49'12] local-lis/les=48/49 n=1 ec=59/48 lis/c=48/48 les/c/f=49/49/0 sis=59) [1] r=0 lpr=59 pi=[48,59)/1 crt=49'12 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 11 09:17:00 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 60 pg[9.a( v 49'12 lc 0'0 (0'0,49'12] local-lis/les=48/49 n=0 ec=59/48 lis/c=48/48 les/c/f=49/49/0 sis=59) [1] r=0 lpr=59 pi=[48,59)/1 crt=49'12 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 11 09:17:00 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 60 pg[9.9( v 49'12 lc 0'0 (0'0,49'12] local-lis/les=48/49 n=0 ec=59/48 lis/c=48/48 les/c/f=49/49/0 sis=59) [1] r=0 lpr=59 pi=[48,59)/1 crt=49'12 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 11 09:17:00 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 60 pg[9.e( v 49'12 lc 0'0 (0'0,49'12] local-lis/les=48/49 n=0 ec=59/48 lis/c=48/48 les/c/f=49/49/0 sis=59) [1] r=0 lpr=59 pi=[48,59)/1 crt=49'12 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 11 09:17:00 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 60 pg[9.11( v 49'12 lc 0'0 (0'0,49'12] local-lis/les=48/49 n=0 ec=59/48 lis/c=48/48 les/c/f=49/49/0 sis=59) [1] r=0 lpr=59 pi=[48,59)/1 crt=49'12 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 11 09:17:00 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 60 pg[9.6( v 49'12 lc 0'0 (0'0,49'12] local-lis/les=48/49 n=1 ec=59/48 lis/c=48/48 les/c/f=49/49/0 sis=59) [1] r=0 lpr=59 pi=[48,59)/1 crt=49'12 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 11 09:17:00 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 60 pg[9.8( v 49'12 lc 0'0 (0'0,49'12] local-lis/les=48/49 n=0 ec=59/48 lis/c=48/48 les/c/f=49/49/0 sis=59) [1] r=0 lpr=59 pi=[48,59)/1 crt=49'12 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 11 09:17:00 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 60 pg[8.1( v 56'45 (0'0,56'45] local-lis/les=45/46 n=1 ec=59/45 lis/c=45/45 les/c/f=46/46/0 sis=59) [1] r=0 lpr=59 pi=[45,59)/1 crt=56'45 lcod 0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 11 09:17:00 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 60 pg[9.16( v 49'12 lc 0'0 (0'0,49'12] local-lis/les=48/49 n=0 ec=59/48 lis/c=48/48 les/c/f=49/49/0 sis=59) [1] r=0 lpr=59 pi=[48,59)/1 crt=49'12 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 11 09:17:00 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 60 pg[8.3( v 56'45 lc 0'0 (0'0,56'45] local-lis/les=45/46 n=1 ec=59/45 lis/c=45/45 les/c/f=46/46/0 sis=59) [1] r=0 lpr=59 pi=[45,59)/1 crt=56'45 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 11 09:17:00 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 60 pg[8.f( v 56'45 lc 0'0 (0'0,56'45] local-lis/les=45/46 n=0 ec=59/45 lis/c=45/45 les/c/f=46/46/0 sis=59) [1] r=0 lpr=59 pi=[45,59)/1 crt=56'45 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 11 09:17:00 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 60 pg[8.17( v 56'45 lc 0'0 (0'0,56'45] local-lis/les=45/46 n=0 ec=59/45 lis/c=45/45 les/c/f=46/46/0 sis=59) [1] r=0 lpr=59 pi=[45,59)/1 crt=56'45 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 11 09:17:00 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 60 pg[8.1a( v 56'45 lc 0'0 (0'0,56'45] local-lis/les=45/46 n=0 ec=59/45 lis/c=45/45 les/c/f=46/46/0 sis=59) [1] r=0 lpr=59 pi=[45,59)/1 crt=56'45 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 11 09:17:00 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 60 pg[8.6( v 56'45 lc 0'0 (0'0,56'45] local-lis/les=45/46 n=0 ec=59/45 lis/c=45/45 les/c/f=46/46/0 sis=59) [1] r=0 lpr=59 pi=[45,59)/1 crt=56'45 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 11 09:17:00 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 60 pg[8.19( v 56'45 lc 0'0 (0'0,56'45] local-lis/les=45/46 n=0 ec=59/45 lis/c=45/45 les/c/f=46/46/0 sis=59) [1] r=0 lpr=59 pi=[45,59)/1 crt=56'45 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 11 09:17:00 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 60 pg[8.8( v 56'45 lc 0'0 (0'0,56'45] local-lis/les=45/46 n=0 ec=59/45 lis/c=45/45 les/c/f=46/46/0 sis=59) [1] r=0 lpr=59 pi=[45,59)/1 crt=56'45 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 11 09:17:00 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 60 pg[8.14( v 56'45 lc 0'0 (0'0,56'45] local-lis/les=45/46 n=0 ec=59/45 lis/c=45/45 les/c/f=46/46/0 sis=59) [1] r=0 lpr=59 pi=[45,59)/1 crt=56'45 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 11 09:17:00 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 60 pg[8.18( v 56'45 lc 0'0 (0'0,56'45] local-lis/les=45/46 n=0 ec=59/45 lis/c=45/45 les/c/f=46/46/0 sis=59) [1] r=0 lpr=59 pi=[45,59)/1 crt=56'45 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 11 09:17:00 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 60 pg[8.4( v 56'45 lc 0'0 (0'0,56'45] local-lis/les=45/46 n=1 ec=59/45 lis/c=45/45 les/c/f=46/46/0 sis=59) [1] r=0 lpr=59 pi=[45,59)/1 crt=56'45 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 11 09:17:00 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 60 pg[8.2( v 56'45 lc 0'0 (0'0,56'45] local-lis/les=45/46 n=1 ec=59/45 lis/c=45/45 les/c/f=46/46/0 sis=59) [1] r=0 lpr=59 pi=[45,59)/1 crt=56'45 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 11 09:17:00 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 60 pg[8.a( v 56'45 lc 0'0 (0'0,56'45] local-lis/les=45/46 n=0 ec=59/45 lis/c=45/45 les/c/f=46/46/0 sis=59) [1] r=0 lpr=59 pi=[45,59)/1 crt=56'45 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 11 09:17:00 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 60 pg[8.15( v 56'45 lc 0'0 (0'0,56'45] local-lis/les=45/46 n=0 ec=59/45 lis/c=45/45 les/c/f=46/46/0 sis=59) [1] r=0 lpr=59 pi=[45,59)/1 crt=56'45 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 11 09:17:00 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 60 pg[8.13( v 56'45 lc 0'0 (0'0,56'45] local-lis/les=45/46 n=0 ec=59/45 lis/c=45/45 les/c/f=46/46/0 sis=59) [1] r=0 lpr=59 pi=[45,59)/1 crt=56'45 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 11 09:17:00 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 60 pg[8.b( v 56'45 lc 0'0 (0'0,56'45] local-lis/les=45/46 n=0 ec=59/45 lis/c=45/45 les/c/f=46/46/0 sis=59) [1] r=0 lpr=59 pi=[45,59)/1 crt=56'45 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 11 09:17:00 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 60 pg[8.9( v 56'45 lc 0'0 (0'0,56'45] local-lis/les=45/46 n=0 ec=59/45 lis/c=45/45 les/c/f=46/46/0 sis=59) [1] r=0 lpr=59 pi=[45,59)/1 crt=56'45 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 11 09:17:00 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 60 pg[8.16( v 56'45 lc 0'0 (0'0,56'45] local-lis/les=45/46 n=0 ec=59/45 lis/c=45/45 les/c/f=46/46/0 sis=59) [1] r=0 lpr=59 pi=[45,59)/1 crt=56'45 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 11 09:17:00 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 60 pg[8.12( v 56'45 lc 0'0 (0'0,56'45] local-lis/les=45/46 n=0 ec=59/45 lis/c=45/45 les/c/f=46/46/0 sis=59) [1] r=0 lpr=59 pi=[45,59)/1 crt=56'45 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 11 09:17:00 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 60 pg[8.c( v 56'45 lc 0'0 (0'0,56'45] local-lis/les=45/46 n=0 ec=59/45 lis/c=45/45 les/c/f=46/46/0 sis=59) [1] r=0 lpr=59 pi=[45,59)/1 crt=56'45 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 11 09:17:00 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 60 pg[8.d( v 56'45 lc 0'0 (0'0,56'45] local-lis/les=45/46 n=0 ec=59/45 lis/c=45/45 les/c/f=46/46/0 sis=59) [1] r=0 lpr=59 pi=[45,59)/1 crt=56'45 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 11 09:17:00 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 60 pg[8.5( v 56'45 lc 0'0 (0'0,56'45] local-lis/les=45/46 n=1 ec=59/45 lis/c=45/45 les/c/f=46/46/0 sis=59) [1] r=0 lpr=59 pi=[45,59)/1 crt=56'45 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 11 09:17:00 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 60 pg[8.10( v 56'45 lc 0'0 (0'0,56'45] local-lis/les=45/46 n=0 ec=59/45 lis/c=45/45 les/c/f=46/46/0 sis=59) [1] r=0 lpr=59 pi=[45,59)/1 crt=56'45 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 11 09:17:00 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 60 pg[8.7( v 56'45 lc 0'0 (0'0,56'45] local-lis/les=45/46 n=0 ec=59/45 lis/c=45/45 les/c/f=46/46/0 sis=59) [1] r=0 lpr=59 pi=[45,59)/1 crt=56'45 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 11 09:17:00 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 60 pg[8.e( v 56'45 lc 0'0 (0'0,56'45] local-lis/les=45/46 n=0 ec=59/45 lis/c=45/45 les/c/f=46/46/0 sis=59) [1] r=0 lpr=59 pi=[45,59)/1 crt=56'45 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 11 09:17:00 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 60 pg[8.1b( v 56'45 lc 0'0 (0'0,56'45] local-lis/les=45/46 n=0 ec=59/45 lis/c=45/45 les/c/f=46/46/0 sis=59) [1] r=0 lpr=59 pi=[45,59)/1 crt=56'45 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 11 09:17:00 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 60 pg[8.11( v 56'45 lc 0'0 (0'0,56'45] local-lis/les=45/46 n=0 ec=59/45 lis/c=45/45 les/c/f=46/46/0 sis=59) [1] r=0 lpr=59 pi=[45,59)/1 crt=56'45 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 11 09:17:00 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 60 pg[8.1c( v 56'45 lc 0'0 (0'0,56'45] local-lis/les=45/46 n=0 ec=59/45 lis/c=45/45 les/c/f=46/46/0 sis=59) [1] r=0 lpr=59 pi=[45,59)/1 crt=56'45 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 11 09:17:00 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 60 pg[8.1d( v 56'45 lc 0'0 (0'0,56'45] local-lis/les=45/46 n=0 ec=59/45 lis/c=45/45 les/c/f=46/46/0 sis=59) [1] r=0 lpr=59 pi=[45,59)/1 crt=56'45 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 11 09:17:00 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 60 pg[8.1e( v 56'45 lc 0'0 (0'0,56'45] local-lis/les=45/46 n=0 ec=59/45 lis/c=45/45 les/c/f=46/46/0 sis=59) [1] r=0 lpr=59 pi=[45,59)/1 crt=56'45 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 11 09:17:00 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 60 pg[8.1f( v 56'45 lc 0'0 (0'0,56'45] local-lis/les=45/46 n=0 ec=59/45 lis/c=45/45 les/c/f=46/46/0 sis=59) [1] r=0 lpr=59 pi=[45,59)/1 crt=56'45 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 11 09:17:00 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-2-0-compute-0-iryjby[94782]: 11/12/2025 09:17:00 : epoch 693a8bfe : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fef58000d00 fd 37 proxy header rest len failed header rlen = % (will set dead)
Dec 11 09:17:00 compute-0 ceph-mgr[74715]: log_channel(cluster) log [DBG] : pgmap v52: 260 pgs: 62 unknown, 198 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Dec 11 09:17:00 compute-0 ceph-mon[74426]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num_actual", "val": "32"} v 0)
Dec 11 09:17:00 compute-0 ceph-mon[74426]: log_channel(audit) log [INF] : from='mgr.14400 192.168.122.100:0/1063598679' entity='mgr.compute-0.wwpcae' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num_actual", "val": "32"}]: dispatch
Dec 11 09:17:00 compute-0 ceph-mon[74426]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num_actual", "val": "32"} v 0)
Dec 11 09:17:00 compute-0 ceph-mon[74426]: log_channel(audit) log [INF] : from='mgr.14400 192.168.122.100:0/1063598679' entity='mgr.compute-0.wwpcae' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num_actual", "val": "32"}]: dispatch
Dec 11 09:17:01 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-2-0-compute-0-iryjby[94782]: 11/12/2025 09:17:01 : epoch 693a8bfe : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fef60001a70 fd 37 proxy header rest len failed header rlen = % (will set dead)
Dec 11 09:17:01 compute-0 ceph-mon[74426]: mon.compute-0@0(leader).osd e60 do_prune osdmap full prune enabled
Dec 11 09:17:01 compute-0 ceph-mon[74426]: log_channel(audit) log [INF] : from='mgr.14400 192.168.122.100:0/1063598679' entity='mgr.compute-0.wwpcae' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num", "val": "32"}]': finished
Dec 11 09:17:01 compute-0 ceph-mon[74426]: log_channel(audit) log [INF] : from='mgr.14400 192.168.122.100:0/1063598679' entity='mgr.compute-0.wwpcae' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num_actual", "val": "32"}]': finished
Dec 11 09:17:01 compute-0 ceph-mon[74426]: log_channel(audit) log [INF] : from='mgr.14400 192.168.122.100:0/1063598679' entity='mgr.compute-0.wwpcae' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num_actual", "val": "32"}]': finished
Dec 11 09:17:01 compute-0 ceph-mon[74426]: mon.compute-0@0(leader).osd e61 e61: 3 total, 3 up, 3 in
Dec 11 09:17:01 compute-0 ceph-mon[74426]: log_channel(cluster) log [DBG] : osdmap e61: 3 total, 3 up, 3 in
Dec 11 09:17:01 compute-0 ceph-mgr[74715]: [progress INFO root] update: starting ev 77a5c1e7-c5c2-40a7-b5c1-7ab15699f541 (PG autoscaler increasing pool 12 PGs from 1 to 32)
Dec 11 09:17:01 compute-0 ceph-mgr[74715]: [progress INFO root] complete: finished ev 211b1a04-1da7-40f8-b10f-e46dc3fa5cae (PG autoscaler increasing pool 8 PGs from 1 to 32)
Dec 11 09:17:01 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 61 pg[11.0( v 53'48 (0'0,53'48] local-lis/les=52/53 n=8 ec=52/52 lis/c=52/52 les/c/f=53/53/0 sis=61 pruub=15.324840546s) [1] r=0 lpr=61 pi=[52,61)/1 crt=53'48 lcod 53'47 mlcod 53'47 active pruub 178.142501831s@ mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [1] -> [1], acting_primary 1 -> 1, up_primary 1 -> 1, role 0 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Dec 11 09:17:01 compute-0 ceph-mgr[74715]: [progress INFO root] Completed event 211b1a04-1da7-40f8-b10f-e46dc3fa5cae (PG autoscaler increasing pool 8 PGs from 1 to 32) in 4 seconds
Dec 11 09:17:01 compute-0 ceph-mgr[74715]: [progress INFO root] complete: finished ev a30dd78d-479c-400b-acea-eb02500fb3cf (PG autoscaler increasing pool 9 PGs from 1 to 32)
Dec 11 09:17:01 compute-0 ceph-mgr[74715]: [progress INFO root] Completed event a30dd78d-479c-400b-acea-eb02500fb3cf (PG autoscaler increasing pool 9 PGs from 1 to 32) in 3 seconds
Dec 11 09:17:01 compute-0 ceph-mgr[74715]: [progress INFO root] complete: finished ev 2b3aa526-b9b5-464f-bc33-38d5472a74aa (PG autoscaler increasing pool 10 PGs from 1 to 32)
Dec 11 09:17:01 compute-0 ceph-mgr[74715]: [progress INFO root] Completed event 2b3aa526-b9b5-464f-bc33-38d5472a74aa (PG autoscaler increasing pool 10 PGs from 1 to 32) in 2 seconds
Dec 11 09:17:01 compute-0 ceph-mgr[74715]: [progress INFO root] complete: finished ev 437b4c6e-7cd7-4476-9187-5ce930e38fbf (PG autoscaler increasing pool 11 PGs from 1 to 32)
Dec 11 09:17:01 compute-0 ceph-mgr[74715]: [progress INFO root] Completed event 437b4c6e-7cd7-4476-9187-5ce930e38fbf (PG autoscaler increasing pool 11 PGs from 1 to 32) in 1 seconds
Dec 11 09:17:01 compute-0 ceph-mgr[74715]: [progress INFO root] complete: finished ev 77a5c1e7-c5c2-40a7-b5c1-7ab15699f541 (PG autoscaler increasing pool 12 PGs from 1 to 32)
Dec 11 09:17:01 compute-0 ceph-mgr[74715]: [progress INFO root] Completed event 77a5c1e7-c5c2-40a7-b5c1-7ab15699f541 (PG autoscaler increasing pool 12 PGs from 1 to 32) in 0 seconds
Dec 11 09:17:01 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 61 pg[8.14( v 56'45 (0'0,56'45] local-lis/les=59/61 n=0 ec=59/45 lis/c=45/45 les/c/f=46/46/0 sis=59) [1] r=0 lpr=59 pi=[45,59)/1 crt=56'45 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 11 09:17:01 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 61 pg[11.0( v 53'48 lc 0'0 (0'0,53'48] local-lis/les=52/53 n=0 ec=52/52 lis/c=52/52 les/c/f=53/53/0 sis=61 pruub=15.324840546s) [1] r=0 lpr=61 pi=[52,61)/1 crt=53'48 lcod 53'47 mlcod 0'0 unknown pruub 178.142501831s@ mbc={}] state<Start>: transitioning to Primary
Dec 11 09:17:01 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 61 pg[9.16( v 49'12 (0'0,49'12] local-lis/les=59/61 n=0 ec=59/48 lis/c=48/48 les/c/f=49/49/0 sis=59) [1] r=0 lpr=59 pi=[48,59)/1 crt=49'12 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 11 09:17:01 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 61 pg[8.15( v 56'45 (0'0,56'45] local-lis/les=59/61 n=0 ec=59/45 lis/c=45/45 les/c/f=46/46/0 sis=59) [1] r=0 lpr=59 pi=[45,59)/1 crt=56'45 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 11 09:17:01 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 61 pg[9.15( v 49'12 (0'0,49'12] local-lis/les=59/61 n=0 ec=59/48 lis/c=48/48 les/c/f=49/49/0 sis=59) [1] r=0 lpr=59 pi=[48,59)/1 crt=49'12 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 11 09:17:01 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 61 pg[9.17( v 49'12 (0'0,49'12] local-lis/les=59/61 n=0 ec=59/48 lis/c=48/48 les/c/f=49/49/0 sis=59) [1] r=0 lpr=59 pi=[48,59)/1 crt=49'12 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 11 09:17:01 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 61 pg[8.16( v 56'45 (0'0,56'45] local-lis/les=59/61 n=0 ec=59/45 lis/c=45/45 les/c/f=46/46/0 sis=59) [1] r=0 lpr=59 pi=[45,59)/1 crt=56'45 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 11 09:17:01 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 61 pg[9.11( v 49'12 (0'0,49'12] local-lis/les=59/61 n=0 ec=59/48 lis/c=48/48 les/c/f=49/49/0 sis=59) [1] r=0 lpr=59 pi=[48,59)/1 crt=49'12 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 11 09:17:01 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 61 pg[9.3( v 49'12 (0'0,49'12] local-lis/les=59/61 n=1 ec=59/48 lis/c=48/48 les/c/f=49/49/0 sis=59) [1] r=0 lpr=59 pi=[48,59)/1 crt=49'12 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 11 09:17:01 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 61 pg[9.10( v 49'12 (0'0,49'12] local-lis/les=59/61 n=0 ec=59/48 lis/c=48/48 les/c/f=49/49/0 sis=59) [1] r=0 lpr=59 pi=[48,59)/1 crt=49'12 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 11 09:17:01 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 61 pg[8.10( v 56'45 (0'0,56'45] local-lis/les=59/61 n=0 ec=59/45 lis/c=45/45 les/c/f=46/46/0 sis=59) [1] r=0 lpr=59 pi=[45,59)/1 crt=56'45 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 11 09:17:01 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 61 pg[8.11( v 56'45 (0'0,56'45] local-lis/les=59/61 n=0 ec=59/45 lis/c=45/45 les/c/f=46/46/0 sis=59) [1] r=0 lpr=59 pi=[45,59)/1 crt=56'45 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 11 09:17:01 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 61 pg[8.2( v 56'45 (0'0,56'45] local-lis/les=59/61 n=1 ec=59/45 lis/c=45/45 les/c/f=46/46/0 sis=59) [1] r=0 lpr=59 pi=[45,59)/1 crt=56'45 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 11 09:17:01 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 61 pg[8.3( v 56'45 (0'0,56'45] local-lis/les=59/61 n=1 ec=59/45 lis/c=45/45 les/c/f=46/46/0 sis=59) [1] r=0 lpr=59 pi=[45,59)/1 crt=56'45 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 11 09:17:01 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 61 pg[9.2( v 49'12 (0'0,49'12] local-lis/les=59/61 n=1 ec=59/48 lis/c=48/48 les/c/f=49/49/0 sis=59) [1] r=0 lpr=59 pi=[48,59)/1 crt=49'12 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 11 09:17:01 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 61 pg[9.e( v 49'12 (0'0,49'12] local-lis/les=59/61 n=0 ec=59/48 lis/c=48/48 les/c/f=49/49/0 sis=59) [1] r=0 lpr=59 pi=[48,59)/1 crt=49'12 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 11 09:17:01 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 61 pg[9.14( v 49'12 (0'0,49'12] local-lis/les=59/61 n=0 ec=59/48 lis/c=48/48 les/c/f=49/49/0 sis=59) [1] r=0 lpr=59 pi=[48,59)/1 crt=49'12 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 11 09:17:01 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 61 pg[9.9( v 49'12 (0'0,49'12] local-lis/les=59/61 n=0 ec=59/48 lis/c=48/48 les/c/f=49/49/0 sis=59) [1] r=0 lpr=59 pi=[48,59)/1 crt=49'12 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 11 09:17:01 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 61 pg[8.8( v 56'45 (0'0,56'45] local-lis/les=59/61 n=0 ec=59/45 lis/c=45/45 les/c/f=46/46/0 sis=59) [1] r=0 lpr=59 pi=[45,59)/1 crt=56'45 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 11 09:17:01 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 61 pg[8.9( v 56'45 (0'0,56'45] local-lis/les=59/61 n=0 ec=59/45 lis/c=45/45 les/c/f=46/46/0 sis=59) [1] r=0 lpr=59 pi=[45,59)/1 crt=56'45 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 11 09:17:01 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 61 pg[8.f( v 56'45 (0'0,56'45] local-lis/les=59/61 n=0 ec=59/45 lis/c=45/45 les/c/f=46/46/0 sis=59) [1] r=0 lpr=59 pi=[45,59)/1 crt=56'45 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 11 09:17:01 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 61 pg[8.a( v 56'45 (0'0,56'45] local-lis/les=59/61 n=0 ec=59/45 lis/c=45/45 les/c/f=46/46/0 sis=59) [1] r=0 lpr=59 pi=[45,59)/1 crt=56'45 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 11 09:17:01 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 61 pg[8.17( v 56'45 (0'0,56'45] local-lis/les=59/61 n=0 ec=59/45 lis/c=45/45 les/c/f=46/46/0 sis=59) [1] r=0 lpr=59 pi=[45,59)/1 crt=56'45 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 11 09:17:01 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 61 pg[9.b( v 49'12 (0'0,49'12] local-lis/les=59/61 n=0 ec=59/48 lis/c=48/48 les/c/f=49/49/0 sis=59) [1] r=0 lpr=59 pi=[48,59)/1 crt=49'12 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 11 09:17:01 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 61 pg[8.d( v 56'45 (0'0,56'45] local-lis/les=59/61 n=0 ec=59/45 lis/c=45/45 les/c/f=46/46/0 sis=59) [1] r=0 lpr=59 pi=[45,59)/1 crt=56'45 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 11 09:17:01 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 61 pg[9.f( v 49'12 (0'0,49'12] local-lis/les=59/61 n=0 ec=59/48 lis/c=48/48 les/c/f=49/49/0 sis=59) [1] r=0 lpr=59 pi=[48,59)/1 crt=49'12 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 11 09:17:01 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 61 pg[9.8( v 49'12 (0'0,49'12] local-lis/les=59/61 n=0 ec=59/48 lis/c=48/48 les/c/f=49/49/0 sis=59) [1] r=0 lpr=59 pi=[48,59)/1 crt=49'12 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 11 09:17:01 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 61 pg[9.d( v 49'12 (0'0,49'12] local-lis/les=59/61 n=0 ec=59/48 lis/c=48/48 les/c/f=49/49/0 sis=59) [1] r=0 lpr=59 pi=[48,59)/1 crt=49'12 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 11 09:17:01 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 61 pg[9.c( v 49'12 (0'0,49'12] local-lis/les=59/61 n=0 ec=59/48 lis/c=48/48 les/c/f=49/49/0 sis=59) [1] r=0 lpr=59 pi=[48,59)/1 crt=49'12 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 11 09:17:01 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 61 pg[8.c( v 56'45 (0'0,56'45] local-lis/les=59/61 n=0 ec=59/45 lis/c=45/45 les/c/f=46/46/0 sis=59) [1] r=0 lpr=59 pi=[45,59)/1 crt=56'45 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 11 09:17:01 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 61 pg[8.e( v 56'45 (0'0,56'45] local-lis/les=59/61 n=0 ec=59/45 lis/c=45/45 les/c/f=46/46/0 sis=59) [1] r=0 lpr=59 pi=[45,59)/1 crt=56'45 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 11 09:17:01 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 61 pg[9.a( v 49'12 (0'0,49'12] local-lis/les=59/61 n=0 ec=59/48 lis/c=48/48 les/c/f=49/49/0 sis=59) [1] r=0 lpr=59 pi=[48,59)/1 crt=49'12 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 11 09:17:01 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 61 pg[9.1( v 49'12 (0'0,49'12] local-lis/les=59/61 n=1 ec=59/48 lis/c=48/48 les/c/f=49/49/0 sis=59) [1] r=0 lpr=59 pi=[48,59)/1 crt=49'12 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 11 09:17:01 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 61 pg[8.1( v 56'45 (0'0,56'45] local-lis/les=59/61 n=1 ec=59/45 lis/c=45/45 les/c/f=46/46/0 sis=59) [1] r=0 lpr=59 pi=[45,59)/1 crt=56'45 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 11 09:17:01 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 61 pg[8.0( v 56'45 (0'0,56'45] local-lis/les=59/61 n=0 ec=45/45 lis/c=45/45 les/c/f=46/46/0 sis=59) [1] r=0 lpr=59 pi=[45,59)/1 crt=56'45 lcod 56'44 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 11 09:17:01 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 61 pg[9.0( v 49'12 (0'0,49'12] local-lis/les=59/61 n=0 ec=48/48 lis/c=48/48 les/c/f=49/49/0 sis=59) [1] r=0 lpr=59 pi=[48,59)/1 crt=49'12 lcod 49'11 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 11 09:17:01 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 61 pg[8.7( v 56'45 (0'0,56'45] local-lis/les=59/61 n=0 ec=59/45 lis/c=45/45 les/c/f=46/46/0 sis=59) [1] r=0 lpr=59 pi=[45,59)/1 crt=56'45 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 11 09:17:01 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 61 pg[9.7( v 49'12 (0'0,49'12] local-lis/les=59/61 n=0 ec=59/48 lis/c=48/48 les/c/f=49/49/0 sis=59) [1] r=0 lpr=59 pi=[48,59)/1 crt=49'12 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 11 09:17:01 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 61 pg[9.4( v 49'12 (0'0,49'12] local-lis/les=59/61 n=1 ec=59/48 lis/c=48/48 les/c/f=49/49/0 sis=59) [1] r=0 lpr=59 pi=[48,59)/1 crt=49'12 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 11 09:17:01 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 61 pg[9.6( v 49'12 (0'0,49'12] local-lis/les=59/61 n=1 ec=59/48 lis/c=48/48 les/c/f=49/49/0 sis=59) [1] r=0 lpr=59 pi=[48,59)/1 crt=49'12 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 11 09:17:01 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 61 pg[8.5( v 56'45 (0'0,56'45] local-lis/les=59/61 n=1 ec=59/45 lis/c=45/45 les/c/f=46/46/0 sis=59) [1] r=0 lpr=59 pi=[45,59)/1 crt=56'45 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 11 09:17:01 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 61 pg[8.b( v 56'45 (0'0,56'45] local-lis/les=59/61 n=0 ec=59/45 lis/c=45/45 les/c/f=46/46/0 sis=59) [1] r=0 lpr=59 pi=[45,59)/1 crt=56'45 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 11 09:17:01 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 61 pg[9.5( v 49'12 (0'0,49'12] local-lis/les=59/61 n=1 ec=59/48 lis/c=48/48 les/c/f=49/49/0 sis=59) [1] r=0 lpr=59 pi=[48,59)/1 crt=49'12 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 11 09:17:01 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 61 pg[8.4( v 56'45 (0'0,56'45] local-lis/les=59/61 n=1 ec=59/45 lis/c=45/45 les/c/f=46/46/0 sis=59) [1] r=0 lpr=59 pi=[45,59)/1 crt=56'45 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 11 09:17:01 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 61 pg[9.1a( v 49'12 (0'0,49'12] local-lis/les=59/61 n=0 ec=59/48 lis/c=48/48 les/c/f=49/49/0 sis=59) [1] r=0 lpr=59 pi=[48,59)/1 crt=49'12 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 11 09:17:01 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 61 pg[8.6( v 56'45 (0'0,56'45] local-lis/les=59/61 n=0 ec=59/45 lis/c=45/45 les/c/f=46/46/0 sis=59) [1] r=0 lpr=59 pi=[45,59)/1 crt=56'45 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 11 09:17:01 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 61 pg[9.1b( v 49'12 (0'0,49'12] local-lis/les=59/61 n=0 ec=59/48 lis/c=48/48 les/c/f=49/49/0 sis=59) [1] r=0 lpr=59 pi=[48,59)/1 crt=49'12 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 11 09:17:01 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 61 pg[8.1a( v 56'45 (0'0,56'45] local-lis/les=59/61 n=0 ec=59/45 lis/c=45/45 les/c/f=46/46/0 sis=59) [1] r=0 lpr=59 pi=[45,59)/1 crt=56'45 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 11 09:17:01 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 61 pg[9.18( v 49'12 (0'0,49'12] local-lis/les=59/61 n=0 ec=59/48 lis/c=48/48 les/c/f=49/49/0 sis=59) [1] r=0 lpr=59 pi=[48,59)/1 crt=49'12 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 11 09:17:01 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 61 pg[8.19( v 56'45 (0'0,56'45] local-lis/les=59/61 n=0 ec=59/45 lis/c=45/45 les/c/f=46/46/0 sis=59) [1] r=0 lpr=59 pi=[45,59)/1 crt=56'45 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 11 09:17:01 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 61 pg[9.19( v 49'12 (0'0,49'12] local-lis/les=59/61 n=0 ec=59/48 lis/c=48/48 les/c/f=49/49/0 sis=59) [1] r=0 lpr=59 pi=[48,59)/1 crt=49'12 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 11 09:17:01 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 61 pg[8.18( v 56'45 (0'0,56'45] local-lis/les=59/61 n=0 ec=59/45 lis/c=45/45 les/c/f=46/46/0 sis=59) [1] r=0 lpr=59 pi=[45,59)/1 crt=56'45 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 11 09:17:01 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 61 pg[9.1e( v 49'12 (0'0,49'12] local-lis/les=59/61 n=0 ec=59/48 lis/c=48/48 les/c/f=49/49/0 sis=59) [1] r=0 lpr=59 pi=[48,59)/1 crt=49'12 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 11 09:17:01 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 61 pg[8.1f( v 56'45 (0'0,56'45] local-lis/les=59/61 n=0 ec=59/45 lis/c=45/45 les/c/f=46/46/0 sis=59) [1] r=0 lpr=59 pi=[45,59)/1 crt=56'45 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 11 09:17:01 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 61 pg[8.1b( v 56'45 (0'0,56'45] local-lis/les=59/61 n=0 ec=59/45 lis/c=45/45 les/c/f=46/46/0 sis=59) [1] r=0 lpr=59 pi=[45,59)/1 crt=56'45 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 11 09:17:01 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 61 pg[9.1f( v 49'12 (0'0,49'12] local-lis/les=59/61 n=0 ec=59/48 lis/c=48/48 les/c/f=49/49/0 sis=59) [1] r=0 lpr=59 pi=[48,59)/1 crt=49'12 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 11 09:17:01 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 61 pg[8.1e( v 56'45 (0'0,56'45] local-lis/les=59/61 n=0 ec=59/45 lis/c=45/45 les/c/f=46/46/0 sis=59) [1] r=0 lpr=59 pi=[45,59)/1 crt=56'45 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 11 09:17:01 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 61 pg[9.1c( v 49'12 (0'0,49'12] local-lis/les=59/61 n=0 ec=59/48 lis/c=48/48 les/c/f=49/49/0 sis=59) [1] r=0 lpr=59 pi=[48,59)/1 crt=49'12 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 11 09:17:01 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 61 pg[8.1d( v 56'45 (0'0,56'45] local-lis/les=59/61 n=0 ec=59/45 lis/c=45/45 les/c/f=46/46/0 sis=59) [1] r=0 lpr=59 pi=[45,59)/1 crt=56'45 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 11 09:17:01 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 61 pg[9.1d( v 49'12 (0'0,49'12] local-lis/les=59/61 n=0 ec=59/48 lis/c=48/48 les/c/f=49/49/0 sis=59) [1] r=0 lpr=59 pi=[48,59)/1 crt=49'12 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 11 09:17:01 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 61 pg[8.13( v 56'45 (0'0,56'45] local-lis/les=59/61 n=0 ec=59/45 lis/c=45/45 les/c/f=46/46/0 sis=59) [1] r=0 lpr=59 pi=[45,59)/1 crt=56'45 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 11 09:17:01 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 61 pg[9.12( v 49'12 (0'0,49'12] local-lis/les=59/61 n=0 ec=59/48 lis/c=48/48 les/c/f=49/49/0 sis=59) [1] r=0 lpr=59 pi=[48,59)/1 crt=49'12 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 11 09:17:01 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 61 pg[8.1c( v 56'45 (0'0,56'45] local-lis/les=59/61 n=0 ec=59/45 lis/c=45/45 les/c/f=46/46/0 sis=59) [1] r=0 lpr=59 pi=[45,59)/1 crt=56'45 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 11 09:17:01 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 61 pg[9.13( v 49'12 (0'0,49'12] local-lis/les=59/61 n=0 ec=59/48 lis/c=48/48 les/c/f=49/49/0 sis=59) [1] r=0 lpr=59 pi=[48,59)/1 crt=49'12 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 11 09:17:01 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 61 pg[8.12( v 56'45 (0'0,56'45] local-lis/les=59/61 n=0 ec=59/45 lis/c=45/45 les/c/f=46/46/0 sis=59) [1] r=0 lpr=59 pi=[45,59)/1 crt=56'45 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 11 09:17:01 compute-0 ceph-mon[74426]: from='mgr.14400 192.168.122.100:0/1063598679' entity='mgr.compute-0.wwpcae' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num", "val": "32"}]: dispatch
Dec 11 09:17:01 compute-0 ceph-mon[74426]: from='mgr.14400 192.168.122.100:0/1063598679' entity='mgr.compute-0.wwpcae' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num_actual", "val": "32"}]: dispatch
Dec 11 09:17:01 compute-0 ceph-mon[74426]: from='mgr.14400 192.168.122.100:0/1063598679' entity='mgr.compute-0.wwpcae' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num_actual", "val": "32"}]: dispatch
Dec 11 09:17:02 compute-0 ceph-mgr[74715]: [progress INFO root] Writing back 23 completed events
Dec 11 09:17:02 compute-0 ceph-mon[74426]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0)
Dec 11 09:17:02 compute-0 ceph-mon[74426]: log_channel(audit) log [INF] : from='mgr.14400 192.168.122.100:0/1063598679' entity='mgr.compute-0.wwpcae' 
Dec 11 09:17:02 compute-0 ceph-mgr[74715]: [progress WARNING root] Starting Global Recovery Event,124 pgs not in active + clean state
Dec 11 09:17:02 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-2-0-compute-0-iryjby[94782]: 11/12/2025 09:17:02 : epoch 693a8bfe : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fef68001c00 fd 37 proxy header rest len failed header rlen = % (will set dead)
Dec 11 09:17:02 compute-0 ceph-osd[82859]: log_channel(cluster) log [DBG] : 8.14 scrub starts
Dec 11 09:17:02 compute-0 ceph-mon[74426]: mon.compute-0@0(leader).osd e61 do_prune osdmap full prune enabled
Dec 11 09:17:02 compute-0 ceph-osd[82859]: log_channel(cluster) log [DBG] : 8.14 scrub ok
Dec 11 09:17:02 compute-0 ceph-mon[74426]: mon.compute-0@0(leader).osd e62 e62: 3 total, 3 up, 3 in
Dec 11 09:17:02 compute-0 ceph-mon[74426]: log_channel(cluster) log [DBG] : osdmap e62: 3 total, 3 up, 3 in
Dec 11 09:17:02 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 62 pg[11.14( v 53'48 lc 0'0 (0'0,53'48] local-lis/les=52/53 n=0 ec=61/52 lis/c=52/52 les/c/f=53/53/0 sis=61) [1] r=0 lpr=61 pi=[52,61)/1 crt=53'48 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 11 09:17:02 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 62 pg[11.17( v 53'48 lc 0'0 (0'0,53'48] local-lis/les=52/53 n=0 ec=61/52 lis/c=52/52 les/c/f=53/53/0 sis=61) [1] r=0 lpr=61 pi=[52,61)/1 crt=53'48 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 11 09:17:02 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 62 pg[11.16( v 53'48 lc 0'0 (0'0,53'48] local-lis/les=52/53 n=0 ec=61/52 lis/c=52/52 les/c/f=53/53/0 sis=61) [1] r=0 lpr=61 pi=[52,61)/1 crt=53'48 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 11 09:17:02 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 62 pg[11.13( v 53'48 lc 0'0 (0'0,53'48] local-lis/les=52/53 n=0 ec=61/52 lis/c=52/52 les/c/f=53/53/0 sis=61) [1] r=0 lpr=61 pi=[52,61)/1 crt=53'48 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 11 09:17:02 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 62 pg[11.15( v 53'48 lc 0'0 (0'0,53'48] local-lis/les=52/53 n=0 ec=61/52 lis/c=52/52 les/c/f=53/53/0 sis=61) [1] r=0 lpr=61 pi=[52,61)/1 crt=53'48 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 11 09:17:02 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 62 pg[11.12( v 53'48 lc 0'0 (0'0,53'48] local-lis/les=52/53 n=0 ec=61/52 lis/c=52/52 les/c/f=53/53/0 sis=61) [1] r=0 lpr=61 pi=[52,61)/1 crt=53'48 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 11 09:17:02 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 62 pg[11.1( v 53'48 (0'0,53'48] local-lis/les=52/53 n=1 ec=61/52 lis/c=52/52 les/c/f=53/53/0 sis=61) [1] r=0 lpr=61 pi=[52,61)/1 crt=53'48 lcod 0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 11 09:17:02 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 62 pg[11.c( v 53'48 lc 0'0 (0'0,53'48] local-lis/les=52/53 n=0 ec=61/52 lis/c=52/52 les/c/f=53/53/0 sis=61) [1] r=0 lpr=61 pi=[52,61)/1 crt=53'48 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 11 09:17:02 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 62 pg[11.b( v 53'48 lc 0'0 (0'0,53'48] local-lis/les=52/53 n=0 ec=61/52 lis/c=52/52 les/c/f=53/53/0 sis=61) [1] r=0 lpr=61 pi=[52,61)/1 crt=53'48 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 11 09:17:02 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 62 pg[11.9( v 53'48 lc 0'0 (0'0,53'48] local-lis/les=52/53 n=0 ec=61/52 lis/c=52/52 les/c/f=53/53/0 sis=61) [1] r=0 lpr=61 pi=[52,61)/1 crt=53'48 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 11 09:17:02 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 62 pg[11.a( v 53'48 lc 0'0 (0'0,53'48] local-lis/les=52/53 n=0 ec=61/52 lis/c=52/52 les/c/f=53/53/0 sis=61) [1] r=0 lpr=61 pi=[52,61)/1 crt=53'48 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 11 09:17:02 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 62 pg[11.d( v 53'48 lc 0'0 (0'0,53'48] local-lis/les=52/53 n=0 ec=61/52 lis/c=52/52 les/c/f=53/53/0 sis=61) [1] r=0 lpr=61 pi=[52,61)/1 crt=53'48 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 11 09:17:02 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 62 pg[11.e( v 53'48 lc 0'0 (0'0,53'48] local-lis/les=52/53 n=0 ec=61/52 lis/c=52/52 les/c/f=53/53/0 sis=61) [1] r=0 lpr=61 pi=[52,61)/1 crt=53'48 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 11 09:17:02 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 62 pg[11.f( v 53'48 lc 0'0 (0'0,53'48] local-lis/les=52/53 n=0 ec=61/52 lis/c=52/52 les/c/f=53/53/0 sis=61) [1] r=0 lpr=61 pi=[52,61)/1 crt=53'48 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 11 09:17:02 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 62 pg[11.8( v 53'48 lc 0'0 (0'0,53'48] local-lis/les=52/53 n=1 ec=61/52 lis/c=52/52 les/c/f=53/53/0 sis=61) [1] r=0 lpr=61 pi=[52,61)/1 crt=53'48 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 11 09:17:02 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 62 pg[11.2( v 53'48 lc 0'0 (0'0,53'48] local-lis/les=52/53 n=1 ec=61/52 lis/c=52/52 les/c/f=53/53/0 sis=61) [1] r=0 lpr=61 pi=[52,61)/1 crt=53'48 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 11 09:17:02 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 62 pg[11.3( v 53'48 lc 0'0 (0'0,53'48] local-lis/les=52/53 n=1 ec=61/52 lis/c=52/52 les/c/f=53/53/0 sis=61) [1] r=0 lpr=61 pi=[52,61)/1 crt=53'48 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 11 09:17:02 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 62 pg[11.4( v 53'48 lc 0'0 (0'0,53'48] local-lis/les=52/53 n=1 ec=61/52 lis/c=52/52 les/c/f=53/53/0 sis=61) [1] r=0 lpr=61 pi=[52,61)/1 crt=53'48 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 11 09:17:02 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 62 pg[11.5( v 53'48 lc 0'0 (0'0,53'48] local-lis/les=52/53 n=1 ec=61/52 lis/c=52/52 les/c/f=53/53/0 sis=61) [1] r=0 lpr=61 pi=[52,61)/1 crt=53'48 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 11 09:17:02 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 62 pg[11.6( v 53'48 lc 0'0 (0'0,53'48] local-lis/les=52/53 n=1 ec=61/52 lis/c=52/52 les/c/f=53/53/0 sis=61) [1] r=0 lpr=61 pi=[52,61)/1 crt=53'48 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 11 09:17:02 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 62 pg[11.7( v 53'48 lc 0'0 (0'0,53'48] local-lis/les=52/53 n=1 ec=61/52 lis/c=52/52 les/c/f=53/53/0 sis=61) [1] r=0 lpr=61 pi=[52,61)/1 crt=53'48 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 11 09:17:02 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 62 pg[11.18( v 53'48 lc 0'0 (0'0,53'48] local-lis/les=52/53 n=0 ec=61/52 lis/c=52/52 les/c/f=53/53/0 sis=61) [1] r=0 lpr=61 pi=[52,61)/1 crt=53'48 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 11 09:17:02 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 62 pg[11.19( v 53'48 lc 0'0 (0'0,53'48] local-lis/les=52/53 n=0 ec=61/52 lis/c=52/52 les/c/f=53/53/0 sis=61) [1] r=0 lpr=61 pi=[52,61)/1 crt=53'48 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 11 09:17:02 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 62 pg[11.1a( v 53'48 lc 0'0 (0'0,53'48] local-lis/les=52/53 n=0 ec=61/52 lis/c=52/52 les/c/f=53/53/0 sis=61) [1] r=0 lpr=61 pi=[52,61)/1 crt=53'48 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 11 09:17:02 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 62 pg[11.1b( v 53'48 lc 0'0 (0'0,53'48] local-lis/les=52/53 n=0 ec=61/52 lis/c=52/52 les/c/f=53/53/0 sis=61) [1] r=0 lpr=61 pi=[52,61)/1 crt=53'48 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 11 09:17:02 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 62 pg[11.1c( v 53'48 lc 0'0 (0'0,53'48] local-lis/les=52/53 n=0 ec=61/52 lis/c=52/52 les/c/f=53/53/0 sis=61) [1] r=0 lpr=61 pi=[52,61)/1 crt=53'48 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 11 09:17:02 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 62 pg[11.1d( v 53'48 lc 0'0 (0'0,53'48] local-lis/les=52/53 n=0 ec=61/52 lis/c=52/52 les/c/f=53/53/0 sis=61) [1] r=0 lpr=61 pi=[52,61)/1 crt=53'48 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 11 09:17:02 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 62 pg[11.1e( v 53'48 lc 0'0 (0'0,53'48] local-lis/les=52/53 n=0 ec=61/52 lis/c=52/52 les/c/f=53/53/0 sis=61) [1] r=0 lpr=61 pi=[52,61)/1 crt=53'48 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 11 09:17:02 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 62 pg[11.1f( v 53'48 lc 0'0 (0'0,53'48] local-lis/les=52/53 n=0 ec=61/52 lis/c=52/52 les/c/f=53/53/0 sis=61) [1] r=0 lpr=61 pi=[52,61)/1 crt=53'48 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 11 09:17:02 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 62 pg[11.10( v 53'48 lc 0'0 (0'0,53'48] local-lis/les=52/53 n=0 ec=61/52 lis/c=52/52 les/c/f=53/53/0 sis=61) [1] r=0 lpr=61 pi=[52,61)/1 crt=53'48 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 11 09:17:02 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 62 pg[11.11( v 53'48 lc 0'0 (0'0,53'48] local-lis/les=52/53 n=0 ec=61/52 lis/c=52/52 les/c/f=53/53/0 sis=61) [1] r=0 lpr=61 pi=[52,61)/1 crt=53'48 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 11 09:17:02 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 62 pg[11.14( v 53'48 (0'0,53'48] local-lis/les=61/62 n=0 ec=61/52 lis/c=52/52 les/c/f=53/53/0 sis=61) [1] r=0 lpr=61 pi=[52,61)/1 crt=53'48 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 11 09:17:02 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 62 pg[11.17( v 53'48 (0'0,53'48] local-lis/les=61/62 n=0 ec=61/52 lis/c=52/52 les/c/f=53/53/0 sis=61) [1] r=0 lpr=61 pi=[52,61)/1 crt=53'48 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 11 09:17:02 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 62 pg[11.13( v 53'48 (0'0,53'48] local-lis/les=61/62 n=0 ec=61/52 lis/c=52/52 les/c/f=53/53/0 sis=61) [1] r=0 lpr=61 pi=[52,61)/1 crt=53'48 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 11 09:17:02 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 62 pg[11.16( v 53'48 (0'0,53'48] local-lis/les=61/62 n=0 ec=61/52 lis/c=52/52 les/c/f=53/53/0 sis=61) [1] r=0 lpr=61 pi=[52,61)/1 crt=53'48 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 11 09:17:02 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 62 pg[11.1( v 53'48 (0'0,53'48] local-lis/les=61/62 n=1 ec=61/52 lis/c=52/52 les/c/f=53/53/0 sis=61) [1] r=0 lpr=61 pi=[52,61)/1 crt=53'48 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 11 09:17:02 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 62 pg[11.15( v 53'48 (0'0,53'48] local-lis/les=61/62 n=0 ec=61/52 lis/c=52/52 les/c/f=53/53/0 sis=61) [1] r=0 lpr=61 pi=[52,61)/1 crt=53'48 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 11 09:17:02 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 62 pg[11.c( v 53'48 (0'0,53'48] local-lis/les=61/62 n=0 ec=61/52 lis/c=52/52 les/c/f=53/53/0 sis=61) [1] r=0 lpr=61 pi=[52,61)/1 crt=53'48 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 11 09:17:02 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 62 pg[11.b( v 53'48 (0'0,53'48] local-lis/les=61/62 n=0 ec=61/52 lis/c=52/52 les/c/f=53/53/0 sis=61) [1] r=0 lpr=61 pi=[52,61)/1 crt=53'48 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 11 09:17:02 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 62 pg[11.0( v 53'48 (0'0,53'48] local-lis/les=61/62 n=0 ec=52/52 lis/c=52/52 les/c/f=53/53/0 sis=61) [1] r=0 lpr=61 pi=[52,61)/1 crt=53'48 lcod 53'47 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 11 09:17:02 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 62 pg[11.12( v 53'48 (0'0,53'48] local-lis/les=61/62 n=0 ec=61/52 lis/c=52/52 les/c/f=53/53/0 sis=61) [1] r=0 lpr=61 pi=[52,61)/1 crt=53'48 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 11 09:17:02 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 62 pg[11.9( v 53'48 (0'0,53'48] local-lis/les=61/62 n=0 ec=61/52 lis/c=52/52 les/c/f=53/53/0 sis=61) [1] r=0 lpr=61 pi=[52,61)/1 crt=53'48 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 11 09:17:02 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 62 pg[11.a( v 53'48 (0'0,53'48] local-lis/les=61/62 n=0 ec=61/52 lis/c=52/52 les/c/f=53/53/0 sis=61) [1] r=0 lpr=61 pi=[52,61)/1 crt=53'48 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 11 09:17:02 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 62 pg[11.d( v 53'48 (0'0,53'48] local-lis/les=61/62 n=0 ec=61/52 lis/c=52/52 les/c/f=53/53/0 sis=61) [1] r=0 lpr=61 pi=[52,61)/1 crt=53'48 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 11 09:17:02 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 62 pg[11.e( v 53'48 (0'0,53'48] local-lis/les=61/62 n=0 ec=61/52 lis/c=52/52 les/c/f=53/53/0 sis=61) [1] r=0 lpr=61 pi=[52,61)/1 crt=53'48 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 11 09:17:02 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 62 pg[11.f( v 53'48 (0'0,53'48] local-lis/les=61/62 n=0 ec=61/52 lis/c=52/52 les/c/f=53/53/0 sis=61) [1] r=0 lpr=61 pi=[52,61)/1 crt=53'48 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 11 09:17:02 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 62 pg[11.3( v 53'48 (0'0,53'48] local-lis/les=61/62 n=1 ec=61/52 lis/c=52/52 les/c/f=53/53/0 sis=61) [1] r=0 lpr=61 pi=[52,61)/1 crt=53'48 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 11 09:17:02 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 62 pg[11.8( v 53'48 (0'0,53'48] local-lis/les=61/62 n=1 ec=61/52 lis/c=52/52 les/c/f=53/53/0 sis=61) [1] r=0 lpr=61 pi=[52,61)/1 crt=53'48 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 11 09:17:02 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 62 pg[11.4( v 53'48 (0'0,53'48] local-lis/les=61/62 n=1 ec=61/52 lis/c=52/52 les/c/f=53/53/0 sis=61) [1] r=0 lpr=61 pi=[52,61)/1 crt=53'48 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 11 09:17:02 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 62 pg[11.6( v 53'48 (0'0,53'48] local-lis/les=61/62 n=1 ec=61/52 lis/c=52/52 les/c/f=53/53/0 sis=61) [1] r=0 lpr=61 pi=[52,61)/1 crt=53'48 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 11 09:17:02 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 62 pg[11.5( v 53'48 (0'0,53'48] local-lis/les=61/62 n=1 ec=61/52 lis/c=52/52 les/c/f=53/53/0 sis=61) [1] r=0 lpr=61 pi=[52,61)/1 crt=53'48 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 11 09:17:02 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 62 pg[11.19( v 53'48 (0'0,53'48] local-lis/les=61/62 n=0 ec=61/52 lis/c=52/52 les/c/f=53/53/0 sis=61) [1] r=0 lpr=61 pi=[52,61)/1 crt=53'48 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 11 09:17:02 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 62 pg[11.7( v 53'48 (0'0,53'48] local-lis/les=61/62 n=1 ec=61/52 lis/c=52/52 les/c/f=53/53/0 sis=61) [1] r=0 lpr=61 pi=[52,61)/1 crt=53'48 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 11 09:17:02 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 62 pg[11.18( v 53'48 (0'0,53'48] local-lis/les=61/62 n=0 ec=61/52 lis/c=52/52 les/c/f=53/53/0 sis=61) [1] r=0 lpr=61 pi=[52,61)/1 crt=53'48 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 11 09:17:02 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 62 pg[11.1a( v 53'48 (0'0,53'48] local-lis/les=61/62 n=0 ec=61/52 lis/c=52/52 les/c/f=53/53/0 sis=61) [1] r=0 lpr=61 pi=[52,61)/1 crt=53'48 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 11 09:17:02 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 62 pg[11.1b( v 53'48 (0'0,53'48] local-lis/les=61/62 n=0 ec=61/52 lis/c=52/52 les/c/f=53/53/0 sis=61) [1] r=0 lpr=61 pi=[52,61)/1 crt=53'48 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 11 09:17:02 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 62 pg[11.1c( v 53'48 (0'0,53'48] local-lis/les=61/62 n=0 ec=61/52 lis/c=52/52 les/c/f=53/53/0 sis=61) [1] r=0 lpr=61 pi=[52,61)/1 crt=53'48 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 11 09:17:02 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 62 pg[11.2( v 53'48 (0'0,53'48] local-lis/les=61/62 n=1 ec=61/52 lis/c=52/52 les/c/f=53/53/0 sis=61) [1] r=0 lpr=61 pi=[52,61)/1 crt=53'48 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 11 09:17:02 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 62 pg[11.1e( v 53'48 (0'0,53'48] local-lis/les=61/62 n=0 ec=61/52 lis/c=52/52 les/c/f=53/53/0 sis=61) [1] r=0 lpr=61 pi=[52,61)/1 crt=53'48 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 11 09:17:02 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 62 pg[11.1f( v 53'48 (0'0,53'48] local-lis/les=61/62 n=0 ec=61/52 lis/c=52/52 les/c/f=53/53/0 sis=61) [1] r=0 lpr=61 pi=[52,61)/1 crt=53'48 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 11 09:17:02 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 62 pg[11.10( v 53'48 (0'0,53'48] local-lis/les=61/62 n=0 ec=61/52 lis/c=52/52 les/c/f=53/53/0 sis=61) [1] r=0 lpr=61 pi=[52,61)/1 crt=53'48 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 11 09:17:02 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 62 pg[11.1d( v 53'48 (0'0,53'48] local-lis/les=61/62 n=0 ec=61/52 lis/c=52/52 les/c/f=53/53/0 sis=61) [1] r=0 lpr=61 pi=[52,61)/1 crt=53'48 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 11 09:17:02 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 62 pg[11.11( v 53'48 (0'0,53'48] local-lis/les=61/62 n=0 ec=61/52 lis/c=52/52 les/c/f=53/53/0 sis=61) [1] r=0 lpr=61 pi=[52,61)/1 crt=53'48 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 11 09:17:02 compute-0 ceph-mon[74426]: pgmap v52: 260 pgs: 62 unknown, 198 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Dec 11 09:17:02 compute-0 ceph-mon[74426]: from='mgr.14400 192.168.122.100:0/1063598679' entity='mgr.compute-0.wwpcae' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num", "val": "32"}]': finished
Dec 11 09:17:02 compute-0 ceph-mon[74426]: from='mgr.14400 192.168.122.100:0/1063598679' entity='mgr.compute-0.wwpcae' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num_actual", "val": "32"}]': finished
Dec 11 09:17:02 compute-0 ceph-mon[74426]: from='mgr.14400 192.168.122.100:0/1063598679' entity='mgr.compute-0.wwpcae' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num_actual", "val": "32"}]': finished
Dec 11 09:17:02 compute-0 ceph-mon[74426]: osdmap e61: 3 total, 3 up, 3 in
Dec 11 09:17:02 compute-0 ceph-mon[74426]: from='mgr.14400 192.168.122.100:0/1063598679' entity='mgr.compute-0.wwpcae' 
Dec 11 09:17:02 compute-0 ceph-mon[74426]: osdmap e62: 3 total, 3 up, 3 in
Dec 11 09:17:02 compute-0 ceph-mgr[74715]: log_channel(cluster) log [DBG] : pgmap v55: 322 pgs: 124 unknown, 198 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail
Dec 11 09:17:02 compute-0 ceph-mon[74426]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num_actual", "val": "32"} v 0)
Dec 11 09:17:02 compute-0 ceph-mon[74426]: log_channel(audit) log [INF] : from='mgr.14400 192.168.122.100:0/1063598679' entity='mgr.compute-0.wwpcae' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num_actual", "val": "32"}]: dispatch
Dec 11 09:17:03 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-2-0-compute-0-iryjby[94782]: 11/12/2025 09:17:03 : epoch 693a8bfe : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fef54001ac0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Dec 11 09:17:03 compute-0 ceph-osd[82859]: log_channel(cluster) log [DBG] : 9.16 scrub starts
Dec 11 09:17:03 compute-0 ceph-osd[82859]: log_channel(cluster) log [DBG] : 9.16 scrub ok
Dec 11 09:17:03 compute-0 ceph-mon[74426]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0)
Dec 11 09:17:03 compute-0 ceph-mon[74426]: mon.compute-0@0(leader).osd e62 do_prune osdmap full prune enabled
Dec 11 09:17:03 compute-0 ceph-mon[74426]: log_channel(audit) log [INF] : from='mgr.14400 192.168.122.100:0/1063598679' entity='mgr.compute-0.wwpcae' 
Dec 11 09:17:03 compute-0 ceph-mon[74426]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Dec 11 09:17:03 compute-0 ceph-mon[74426]: log_channel(audit) log [INF] : from='mgr.14400 192.168.122.100:0/1063598679' entity='mgr.compute-0.wwpcae' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num_actual", "val": "32"}]': finished
Dec 11 09:17:03 compute-0 ceph-mon[74426]: mon.compute-0@0(leader).osd e63 e63: 3 total, 3 up, 3 in
Dec 11 09:17:03 compute-0 ceph-mon[74426]: 8.14 scrub starts
Dec 11 09:17:03 compute-0 ceph-mon[74426]: 8.14 scrub ok
Dec 11 09:17:03 compute-0 ceph-mon[74426]: 10.7 deep-scrub starts
Dec 11 09:17:03 compute-0 ceph-mon[74426]: 10.7 deep-scrub ok
Dec 11 09:17:03 compute-0 ceph-mon[74426]: pgmap v55: 322 pgs: 124 unknown, 198 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail
Dec 11 09:17:03 compute-0 ceph-mon[74426]: from='mgr.14400 192.168.122.100:0/1063598679' entity='mgr.compute-0.wwpcae' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num_actual", "val": "32"}]: dispatch
Dec 11 09:17:03 compute-0 ceph-mon[74426]: log_channel(cluster) log [DBG] : osdmap e63: 3 total, 3 up, 3 in
Dec 11 09:17:04 compute-0 ceph-mon[74426]: log_channel(audit) log [INF] : from='mgr.14400 192.168.122.100:0/1063598679' entity='mgr.compute-0.wwpcae' 
Dec 11 09:17:04 compute-0 ceph-mon[74426]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.ingress.nfs.cephfs}] v 0)
Dec 11 09:17:04 compute-0 ceph-mon[74426]: mon.compute-0@0(leader).osd e63 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 11 09:17:04 compute-0 ceph-mon[74426]: log_channel(audit) log [INF] : from='mgr.14400 192.168.122.100:0/1063598679' entity='mgr.compute-0.wwpcae' 
Dec 11 09:17:04 compute-0 ceph-mon[74426]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/ingress.nfs.cephfs/keepalived_password}] v 0)
Dec 11 09:17:04 compute-0 ceph-mon[74426]: log_channel(audit) log [INF] : from='mgr.14400 192.168.122.100:0/1063598679' entity='mgr.compute-0.wwpcae' 
Dec 11 09:17:04 compute-0 ceph-mgr[74715]: [cephadm INFO cephadm.services.ingress] 192.168.122.2 is in 192.168.122.0/24 on compute-2 interface br-ex
Dec 11 09:17:04 compute-0 ceph-mgr[74715]: log_channel(cephadm) log [INF] : 192.168.122.2 is in 192.168.122.0/24 on compute-2 interface br-ex
Dec 11 09:17:04 compute-0 ceph-mgr[74715]: [cephadm INFO cephadm.services.ingress] 192.168.122.2 is in 192.168.122.0/24 on compute-0 interface br-ex
Dec 11 09:17:04 compute-0 ceph-mgr[74715]: log_channel(cephadm) log [INF] : 192.168.122.2 is in 192.168.122.0/24 on compute-0 interface br-ex
Dec 11 09:17:04 compute-0 ceph-mgr[74715]: [cephadm INFO cephadm.services.ingress] 192.168.122.2 is in 192.168.122.0/24 on compute-1 interface br-ex
Dec 11 09:17:04 compute-0 ceph-mgr[74715]: log_channel(cephadm) log [INF] : 192.168.122.2 is in 192.168.122.0/24 on compute-1 interface br-ex
Dec 11 09:17:04 compute-0 ceph-mgr[74715]: [cephadm INFO cephadm.serve] Deploying daemon keepalived.nfs.cephfs.compute-2.qemqoo on compute-2
Dec 11 09:17:04 compute-0 ceph-mgr[74715]: log_channel(cephadm) log [INF] : Deploying daemon keepalived.nfs.cephfs.compute-2.qemqoo on compute-2
Dec 11 09:17:04 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-2-0-compute-0-iryjby[94782]: 11/12/2025 09:17:04 : epoch 693a8bfe : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fef58001820 fd 37 proxy header rest len failed header rlen = % (will set dead)
Dec 11 09:17:04 compute-0 ceph-osd[82859]: log_channel(cluster) log [DBG] : 8.15 scrub starts
Dec 11 09:17:04 compute-0 ceph-osd[82859]: log_channel(cluster) log [DBG] : 8.15 scrub ok
Dec 11 09:17:04 compute-0 ceph-mon[74426]: 9.16 scrub starts
Dec 11 09:17:04 compute-0 ceph-mon[74426]: 9.16 scrub ok
Dec 11 09:17:04 compute-0 ceph-mon[74426]: 10.11 deep-scrub starts
Dec 11 09:17:04 compute-0 ceph-mon[74426]: from='mgr.14400 192.168.122.100:0/1063598679' entity='mgr.compute-0.wwpcae' 
Dec 11 09:17:04 compute-0 ceph-mon[74426]: 10.11 deep-scrub ok
Dec 11 09:17:04 compute-0 ceph-mon[74426]: from='mgr.14400 192.168.122.100:0/1063598679' entity='mgr.compute-0.wwpcae' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num_actual", "val": "32"}]': finished
Dec 11 09:17:04 compute-0 ceph-mon[74426]: osdmap e63: 3 total, 3 up, 3 in
Dec 11 09:17:04 compute-0 ceph-mon[74426]: from='mgr.14400 192.168.122.100:0/1063598679' entity='mgr.compute-0.wwpcae' 
Dec 11 09:17:04 compute-0 ceph-mon[74426]: from='mgr.14400 192.168.122.100:0/1063598679' entity='mgr.compute-0.wwpcae' 
Dec 11 09:17:04 compute-0 ceph-mon[74426]: from='mgr.14400 192.168.122.100:0/1063598679' entity='mgr.compute-0.wwpcae' 
Dec 11 09:17:04 compute-0 ceph-mon[74426]: 192.168.122.2 is in 192.168.122.0/24 on compute-2 interface br-ex
Dec 11 09:17:04 compute-0 ceph-mon[74426]: 192.168.122.2 is in 192.168.122.0/24 on compute-0 interface br-ex
Dec 11 09:17:04 compute-0 ceph-mon[74426]: 192.168.122.2 is in 192.168.122.0/24 on compute-1 interface br-ex
Dec 11 09:17:04 compute-0 ceph-mon[74426]: Deploying daemon keepalived.nfs.cephfs.compute-2.qemqoo on compute-2
Dec 11 09:17:04 compute-0 ceph-mgr[74715]: log_channel(cluster) log [DBG] : pgmap v57: 353 pgs: 31 unknown, 322 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail
Dec 11 09:17:04 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-2-0-compute-0-iryjby[94782]: 11/12/2025 09:17:04 : epoch 693a8bfe : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fef60001a70 fd 37 proxy header rest len failed header rlen = % (will set dead)
Dec 11 09:17:05 compute-0 ceph-mon[74426]: mon.compute-0@0(leader).osd e63 do_prune osdmap full prune enabled
Dec 11 09:17:05 compute-0 ceph-mon[74426]: mon.compute-0@0(leader).osd e64 e64: 3 total, 3 up, 3 in
Dec 11 09:17:05 compute-0 ceph-mon[74426]: log_channel(cluster) log [DBG] : osdmap e64: 3 total, 3 up, 3 in
Dec 11 09:17:05 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-2-0-compute-0-iryjby[94782]: 11/12/2025 09:17:05 : epoch 693a8bfe : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fef68002910 fd 37 proxy header rest len failed header rlen = % (will set dead)
Dec 11 09:17:05 compute-0 ceph-osd[82859]: log_channel(cluster) log [DBG] : 9.17 scrub starts
Dec 11 09:17:05 compute-0 ceph-osd[82859]: log_channel(cluster) log [DBG] : 9.17 scrub ok
Dec 11 09:17:05 compute-0 ceph-mon[74426]: 8.15 scrub starts
Dec 11 09:17:05 compute-0 ceph-mon[74426]: 8.15 scrub ok
Dec 11 09:17:05 compute-0 ceph-mon[74426]: 10.1e scrub starts
Dec 11 09:17:05 compute-0 ceph-mon[74426]: 10.1e scrub ok
Dec 11 09:17:05 compute-0 ceph-mon[74426]: pgmap v57: 353 pgs: 31 unknown, 322 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail
Dec 11 09:17:05 compute-0 ceph-mon[74426]: osdmap e64: 3 total, 3 up, 3 in
Dec 11 09:17:06 compute-0 ceph-osd[82859]: log_channel(cluster) log [DBG] : 8.16 scrub starts
Dec 11 09:17:06 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-2-0-compute-0-iryjby[94782]: 11/12/2025 09:17:06 : epoch 693a8bfe : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fef54001ac0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Dec 11 09:17:06 compute-0 ceph-osd[82859]: log_channel(cluster) log [DBG] : 8.16 scrub ok
Dec 11 09:17:06 compute-0 ceph-mgr[74715]: log_channel(cluster) log [DBG] : pgmap v59: 353 pgs: 31 unknown, 322 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail
Dec 11 09:17:06 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-2-0-compute-0-iryjby[94782]: 11/12/2025 09:17:06 : epoch 693a8bfe : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fef68002910 fd 37 proxy header rest len failed header rlen = % (will set dead)
Dec 11 09:17:06 compute-0 sudo[95312]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qkebfxoepwbnyndjrtsopzctkbobprqh ; /usr/bin/python3'
Dec 11 09:17:06 compute-0 sudo[95312]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 11 09:17:07 compute-0 python3[95314]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint radosgw-admin quay.io/ceph/ceph:v19 --fsid 31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch ps --daemon-type rgw --format json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 11 09:17:07 compute-0 podman[95315]: 2025-12-11 09:17:07.211092976 +0000 UTC m=+0.050427777 container create 478d08864ce9363620bea62cea7d5b73599673621e7e9030ae50d241a8b72703 (image=quay.io/ceph/ceph:v19, name=tender_almeida, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 11 09:17:07 compute-0 systemd[1]: Started libpod-conmon-478d08864ce9363620bea62cea7d5b73599673621e7e9030ae50d241a8b72703.scope.
Dec 11 09:17:07 compute-0 podman[95315]: 2025-12-11 09:17:07.190115955 +0000 UTC m=+0.029450786 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Dec 11 09:17:07 compute-0 systemd[1]: Started libcrun container.
Dec 11 09:17:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fc5336a6a3cc81bb27f0906bac57c2818b37cac3dd22d0d5b0a7e92d48f026d0/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 11 09:17:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fc5336a6a3cc81bb27f0906bac57c2818b37cac3dd22d0d5b0a7e92d48f026d0/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec 11 09:17:07 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-2-0-compute-0-iryjby[94782]: 11/12/2025 09:17:07 : epoch 693a8bfe : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fef58001820 fd 37 proxy header rest len failed header rlen = % (will set dead)
Dec 11 09:17:07 compute-0 podman[95315]: 2025-12-11 09:17:07.340057109 +0000 UTC m=+0.179391910 container init 478d08864ce9363620bea62cea7d5b73599673621e7e9030ae50d241a8b72703 (image=quay.io/ceph/ceph:v19, name=tender_almeida, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 11 09:17:07 compute-0 podman[95315]: 2025-12-11 09:17:07.347424687 +0000 UTC m=+0.186759488 container start 478d08864ce9363620bea62cea7d5b73599673621e7e9030ae50d241a8b72703 (image=quay.io/ceph/ceph:v19, name=tender_almeida, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True)
Dec 11 09:17:07 compute-0 podman[95315]: 2025-12-11 09:17:07.351473814 +0000 UTC m=+0.190808615 container attach 478d08864ce9363620bea62cea7d5b73599673621e7e9030ae50d241a8b72703 (image=quay.io/ceph/ceph:v19, name=tender_almeida, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Dec 11 09:17:07 compute-0 tender_almeida[95331]: ERROR: invalid flag --daemon-type
Dec 11 09:17:07 compute-0 systemd[1]: libpod-478d08864ce9363620bea62cea7d5b73599673621e7e9030ae50d241a8b72703.scope: Deactivated successfully.
Dec 11 09:17:07 compute-0 podman[95351]: 2025-12-11 09:17:07.446878065 +0000 UTC m=+0.028712082 container died 478d08864ce9363620bea62cea7d5b73599673621e7e9030ae50d241a8b72703 (image=quay.io/ceph/ceph:v19, name=tender_almeida, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 11 09:17:07 compute-0 systemd[1]: var-lib-containers-storage-overlay-fc5336a6a3cc81bb27f0906bac57c2818b37cac3dd22d0d5b0a7e92d48f026d0-merged.mount: Deactivated successfully.
Dec 11 09:17:07 compute-0 podman[95351]: 2025-12-11 09:17:07.494105621 +0000 UTC m=+0.075939638 container remove 478d08864ce9363620bea62cea7d5b73599673621e7e9030ae50d241a8b72703 (image=quay.io/ceph/ceph:v19, name=tender_almeida, CEPH_REF=squid, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 11 09:17:07 compute-0 systemd[1]: libpod-conmon-478d08864ce9363620bea62cea7d5b73599673621e7e9030ae50d241a8b72703.scope: Deactivated successfully.
Dec 11 09:17:07 compute-0 sudo[95312]: pam_unix(sudo:session): session closed for user root
Dec 11 09:17:07 compute-0 ceph-osd[82859]: log_channel(cluster) log [DBG] : 9.15 scrub starts
Dec 11 09:17:07 compute-0 ceph-osd[82859]: log_channel(cluster) log [DBG] : 9.15 scrub ok
Dec 11 09:17:08 compute-0 ceph-mon[74426]: 9.17 scrub starts
Dec 11 09:17:08 compute-0 ceph-mon[74426]: 9.17 scrub ok
Dec 11 09:17:08 compute-0 ceph-mon[74426]: 10.1d scrub starts
Dec 11 09:17:08 compute-0 ceph-mon[74426]: 10.1d scrub ok
Dec 11 09:17:08 compute-0 ceph-osd[82859]: log_channel(cluster) log [DBG] : 9.11 deep-scrub starts
Dec 11 09:17:08 compute-0 ceph-osd[82859]: log_channel(cluster) log [DBG] : 9.11 deep-scrub ok
Dec 11 09:17:08 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-2-0-compute-0-iryjby[94782]: 11/12/2025 09:17:08 : epoch 693a8bfe : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fef54001ac0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Dec 11 09:17:08 compute-0 ceph-mgr[74715]: log_channel(cluster) log [DBG] : pgmap v60: 353 pgs: 353 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail
Dec 11 09:17:08 compute-0 ceph-mon[74426]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": ".nfs", "var": "pgp_num_actual", "val": "32"} v 0)
Dec 11 09:17:08 compute-0 ceph-mon[74426]: log_channel(audit) log [INF] : from='mgr.14400 192.168.122.100:0/1063598679' entity='mgr.compute-0.wwpcae' cmd=[{"prefix": "osd pool set", "pool": ".nfs", "var": "pgp_num_actual", "val": "32"}]: dispatch
Dec 11 09:17:08 compute-0 ceph-mon[74426]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": ".rgw.root", "var": "pgp_num_actual", "val": "32"} v 0)
Dec 11 09:17:08 compute-0 ceph-mon[74426]: log_channel(audit) log [INF] : from='mgr.14400 192.168.122.100:0/1063598679' entity='mgr.compute-0.wwpcae' cmd=[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pgp_num_actual", "val": "32"}]: dispatch
Dec 11 09:17:08 compute-0 ceph-mon[74426]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pgp_num_actual", "val": "32"} v 0)
Dec 11 09:17:08 compute-0 ceph-mon[74426]: log_channel(audit) log [INF] : from='mgr.14400 192.168.122.100:0/1063598679' entity='mgr.compute-0.wwpcae' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pgp_num_actual", "val": "32"}]: dispatch
Dec 11 09:17:08 compute-0 ceph-mon[74426]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "2"} v 0)
Dec 11 09:17:08 compute-0 ceph-mon[74426]: log_channel(audit) log [INF] : from='mgr.14400 192.168.122.100:0/1063598679' entity='mgr.compute-0.wwpcae' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "2"}]: dispatch
Dec 11 09:17:08 compute-0 ceph-mon[74426]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pgp_num_actual", "val": "32"} v 0)
Dec 11 09:17:08 compute-0 ceph-mon[74426]: log_channel(audit) log [INF] : from='mgr.14400 192.168.122.100:0/1063598679' entity='mgr.compute-0.wwpcae' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pgp_num_actual", "val": "32"}]: dispatch
Dec 11 09:17:08 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-2-0-compute-0-iryjby[94782]: 11/12/2025 09:17:08 : epoch 693a8bfe : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fef60001a70 fd 37 proxy header rest len failed header rlen = % (will set dead)
Dec 11 09:17:09 compute-0 ceph-mon[74426]: 8.16 scrub starts
Dec 11 09:17:09 compute-0 ceph-mon[74426]: 8.16 scrub ok
Dec 11 09:17:09 compute-0 ceph-mon[74426]: 10.1c deep-scrub starts
Dec 11 09:17:09 compute-0 ceph-mon[74426]: 10.1c deep-scrub ok
Dec 11 09:17:09 compute-0 ceph-mon[74426]: pgmap v59: 353 pgs: 31 unknown, 322 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail
Dec 11 09:17:09 compute-0 ceph-mon[74426]: 9.15 scrub starts
Dec 11 09:17:09 compute-0 ceph-mon[74426]: 9.15 scrub ok
Dec 11 09:17:09 compute-0 ceph-mon[74426]: 10.1a scrub starts
Dec 11 09:17:09 compute-0 ceph-mon[74426]: 10.1a scrub ok
Dec 11 09:17:09 compute-0 ceph-mon[74426]: 9.11 deep-scrub starts
Dec 11 09:17:09 compute-0 ceph-mon[74426]: 9.11 deep-scrub ok
Dec 11 09:17:09 compute-0 ceph-mon[74426]: from='mgr.14400 192.168.122.100:0/1063598679' entity='mgr.compute-0.wwpcae' cmd=[{"prefix": "osd pool set", "pool": ".nfs", "var": "pgp_num_actual", "val": "32"}]: dispatch
Dec 11 09:17:09 compute-0 ceph-mon[74426]: from='mgr.14400 192.168.122.100:0/1063598679' entity='mgr.compute-0.wwpcae' cmd=[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pgp_num_actual", "val": "32"}]: dispatch
Dec 11 09:17:09 compute-0 ceph-mon[74426]: from='mgr.14400 192.168.122.100:0/1063598679' entity='mgr.compute-0.wwpcae' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pgp_num_actual", "val": "32"}]: dispatch
Dec 11 09:17:09 compute-0 ceph-mon[74426]: from='mgr.14400 192.168.122.100:0/1063598679' entity='mgr.compute-0.wwpcae' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "2"}]: dispatch
Dec 11 09:17:09 compute-0 ceph-mon[74426]: from='mgr.14400 192.168.122.100:0/1063598679' entity='mgr.compute-0.wwpcae' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pgp_num_actual", "val": "32"}]: dispatch
Dec 11 09:17:09 compute-0 ceph-mon[74426]: mon.compute-0@0(leader).osd e64 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 11 09:17:09 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-2-0-compute-0-iryjby[94782]: 11/12/2025 09:17:09 : epoch 693a8bfe : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fef68002910 fd 37 proxy header rest len failed header rlen = % (will set dead)
Dec 11 09:17:09 compute-0 ceph-osd[82859]: log_channel(cluster) log [DBG] : 9.10 scrub starts
Dec 11 09:17:09 compute-0 ceph-osd[82859]: log_channel(cluster) log [DBG] : 9.10 scrub ok
Dec 11 09:17:09 compute-0 ceph-mon[74426]: mon.compute-0@0(leader).osd e64 do_prune osdmap full prune enabled
Dec 11 09:17:09 compute-0 ceph-mon[74426]: log_channel(audit) log [INF] : from='mgr.14400 192.168.122.100:0/1063598679' entity='mgr.compute-0.wwpcae' cmd='[{"prefix": "osd pool set", "pool": ".nfs", "var": "pgp_num_actual", "val": "32"}]': finished
Dec 11 09:17:09 compute-0 ceph-mon[74426]: log_channel(audit) log [INF] : from='mgr.14400 192.168.122.100:0/1063598679' entity='mgr.compute-0.wwpcae' cmd='[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pgp_num_actual", "val": "32"}]': finished
Dec 11 09:17:09 compute-0 ceph-mon[74426]: log_channel(audit) log [INF] : from='mgr.14400 192.168.122.100:0/1063598679' entity='mgr.compute-0.wwpcae' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pgp_num_actual", "val": "32"}]': finished
Dec 11 09:17:09 compute-0 ceph-mon[74426]: log_channel(audit) log [INF] : from='mgr.14400 192.168.122.100:0/1063598679' entity='mgr.compute-0.wwpcae' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "2"}]': finished
Dec 11 09:17:09 compute-0 ceph-mon[74426]: log_channel(audit) log [INF] : from='mgr.14400 192.168.122.100:0/1063598679' entity='mgr.compute-0.wwpcae' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pgp_num_actual", "val": "32"}]': finished
Dec 11 09:17:09 compute-0 ceph-mon[74426]: mon.compute-0@0(leader).osd e65 e65: 3 total, 3 up, 3 in
Dec 11 09:17:09 compute-0 ceph-mon[74426]: log_channel(cluster) log [DBG] : osdmap e65: 3 total, 3 up, 3 in
Dec 11 09:17:09 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 65 pg[12.19( empty local-lis/les=0/0 n=0 ec=63/54 lis/c=63/63 les/c/f=64/64/0 sis=65) [1] r=0 lpr=65 pi=[63,65)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 11 09:17:09 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 65 pg[12.1c( empty local-lis/les=0/0 n=0 ec=63/54 lis/c=63/63 les/c/f=64/64/0 sis=65) [1] r=0 lpr=65 pi=[63,65)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 11 09:17:09 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 65 pg[12.8( empty local-lis/les=0/0 n=0 ec=63/54 lis/c=63/63 les/c/f=64/64/0 sis=65) [1] r=0 lpr=65 pi=[63,65)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 11 09:17:09 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 65 pg[12.a( empty local-lis/les=0/0 n=0 ec=63/54 lis/c=63/63 les/c/f=64/64/0 sis=65) [1] r=0 lpr=65 pi=[63,65)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 11 09:17:09 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 65 pg[12.e( empty local-lis/les=0/0 n=0 ec=63/54 lis/c=63/63 les/c/f=64/64/0 sis=65) [1] r=0 lpr=65 pi=[63,65)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 11 09:17:09 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 65 pg[12.c( empty local-lis/les=0/0 n=0 ec=63/54 lis/c=63/63 les/c/f=64/64/0 sis=65) [1] r=0 lpr=65 pi=[63,65)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 11 09:17:09 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 65 pg[12.b( empty local-lis/les=0/0 n=0 ec=63/54 lis/c=63/63 les/c/f=64/64/0 sis=65) [1] r=0 lpr=65 pi=[63,65)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 11 09:17:09 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 65 pg[12.6( empty local-lis/les=0/0 n=0 ec=63/54 lis/c=63/63 les/c/f=64/64/0 sis=65) [1] r=0 lpr=65 pi=[63,65)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 11 09:17:09 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 65 pg[12.12( empty local-lis/les=0/0 n=0 ec=63/54 lis/c=63/63 les/c/f=64/64/0 sis=65) [1] r=0 lpr=65 pi=[63,65)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 11 09:17:09 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 65 pg[12.10( empty local-lis/les=0/0 n=0 ec=63/54 lis/c=63/63 les/c/f=64/64/0 sis=65) [1] r=0 lpr=65 pi=[63,65)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 11 09:17:09 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 65 pg[11.14( v 62'51 (0'0,62'51] local-lis/les=61/62 n=0 ec=61/52 lis/c=61/61 les/c/f=62/62/0 sis=65 pruub=8.940612793s) [0] r=-1 lpr=65 pi=[61,65)/1 crt=62'49 lcod 62'50 mlcod 62'50 active pruub 179.833511353s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 11 09:17:09 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 65 pg[11.14( v 62'51 (0'0,62'51] local-lis/les=61/62 n=0 ec=61/52 lis/c=61/61 les/c/f=62/62/0 sis=65 pruub=8.940532684s) [0] r=-1 lpr=65 pi=[61,65)/1 crt=62'49 lcod 62'50 mlcod 0'0 unknown NOTIFY pruub 179.833511353s@ mbc={}] state<Start>: transitioning to Stray
Dec 11 09:17:09 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 65 pg[8.17( v 56'45 (0'0,56'45] local-lis/les=59/61 n=0 ec=59/45 lis/c=59/59 les/c/f=61/61/0 sis=65 pruub=15.934049606s) [0] r=-1 lpr=65 pi=[59,65)/1 crt=56'45 lcod 0'0 mlcod 0'0 active pruub 186.827133179s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 11 09:17:09 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 65 pg[11.17( v 53'48 (0'0,53'48] local-lis/les=61/62 n=0 ec=61/52 lis/c=61/61 les/c/f=62/62/0 sis=65 pruub=8.944162369s) [2] r=-1 lpr=65 pi=[61,65)/1 crt=53'48 lcod 0'0 mlcod 0'0 active pruub 179.837249756s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 11 09:17:09 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 65 pg[8.17( v 56'45 (0'0,56'45] local-lis/les=59/61 n=0 ec=59/45 lis/c=59/59 les/c/f=61/61/0 sis=65 pruub=15.934023857s) [0] r=-1 lpr=65 pi=[59,65)/1 crt=56'45 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 186.827133179s@ mbc={}] state<Start>: transitioning to Stray
Dec 11 09:17:09 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 65 pg[9.15( v 49'12 (0'0,49'12] local-lis/les=59/61 n=0 ec=59/48 lis/c=59/59 les/c/f=61/61/0 sis=65 pruub=15.933504105s) [0] r=-1 lpr=65 pi=[59,65)/1 crt=49'12 lcod 0'0 mlcod 0'0 active pruub 186.826660156s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 11 09:17:09 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 65 pg[11.17( v 53'48 (0'0,53'48] local-lis/les=61/62 n=0 ec=61/52 lis/c=61/61 les/c/f=62/62/0 sis=65 pruub=8.944142342s) [2] r=-1 lpr=65 pi=[61,65)/1 crt=53'48 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 179.837249756s@ mbc={}] state<Start>: transitioning to Stray
Dec 11 09:17:09 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 65 pg[9.15( v 49'12 (0'0,49'12] local-lis/les=59/61 n=0 ec=59/48 lis/c=59/59 les/c/f=61/61/0 sis=65 pruub=15.933482170s) [0] r=-1 lpr=65 pi=[59,65)/1 crt=49'12 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 186.826660156s@ mbc={}] state<Start>: transitioning to Stray
Dec 11 09:17:09 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 65 pg[8.14( v 56'45 (0'0,56'45] local-lis/les=59/61 n=0 ec=59/45 lis/c=59/59 les/c/f=61/61/0 sis=65 pruub=15.928055763s) [0] r=-1 lpr=65 pi=[59,65)/1 crt=56'45 lcod 0'0 mlcod 0'0 active pruub 186.821395874s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 11 09:17:09 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 65 pg[8.14( v 56'45 (0'0,56'45] local-lis/les=59/61 n=0 ec=59/45 lis/c=59/59 les/c/f=61/61/0 sis=65 pruub=15.928027153s) [0] r=-1 lpr=65 pi=[59,65)/1 crt=56'45 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 186.821395874s@ mbc={}] state<Start>: transitioning to Stray
Dec 11 09:17:09 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 65 pg[11.16( v 53'48 (0'0,53'48] local-lis/les=61/62 n=0 ec=61/52 lis/c=61/61 les/c/f=62/62/0 sis=65 pruub=8.943950653s) [2] r=-1 lpr=65 pi=[61,65)/1 crt=53'48 lcod 0'0 mlcod 0'0 active pruub 179.837387085s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 11 09:17:09 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 65 pg[11.16( v 53'48 (0'0,53'48] local-lis/les=61/62 n=0 ec=61/52 lis/c=61/61 les/c/f=62/62/0 sis=65 pruub=8.943917274s) [2] r=-1 lpr=65 pi=[61,65)/1 crt=53'48 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 179.837387085s@ mbc={}] state<Start>: transitioning to Stray
Dec 11 09:17:09 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 65 pg[9.17( v 49'12 (0'0,49'12] local-lis/les=59/61 n=0 ec=59/48 lis/c=59/59 les/c/f=61/61/0 sis=65 pruub=15.933000565s) [2] r=-1 lpr=65 pi=[59,65)/1 crt=49'12 lcod 0'0 mlcod 0'0 active pruub 186.826568604s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 11 09:17:09 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 65 pg[9.16( v 49'12 (0'0,49'12] local-lis/les=59/61 n=0 ec=59/48 lis/c=59/59 les/c/f=61/61/0 sis=65 pruub=15.932668686s) [2] r=-1 lpr=65 pi=[59,65)/1 crt=49'12 lcod 0'0 mlcod 0'0 active pruub 186.826248169s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 11 09:17:09 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 65 pg[9.17( v 49'12 (0'0,49'12] local-lis/les=59/61 n=0 ec=59/48 lis/c=59/59 les/c/f=61/61/0 sis=65 pruub=15.932982445s) [2] r=-1 lpr=65 pi=[59,65)/1 crt=49'12 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 186.826568604s@ mbc={}] state<Start>: transitioning to Stray
Dec 11 09:17:09 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 65 pg[8.16( v 56'45 (0'0,56'45] local-lis/les=59/61 n=2 ec=59/45 lis/c=59/59 les/c/f=61/61/0 sis=65 pruub=15.932944298s) [2] r=-1 lpr=65 pi=[59,65)/1 crt=56'45 lcod 0'0 mlcod 0'0 active pruub 186.826583862s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 11 09:17:09 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 65 pg[8.16( v 56'45 (0'0,56'45] local-lis/les=59/61 n=2 ec=59/45 lis/c=59/59 les/c/f=61/61/0 sis=65 pruub=15.932926178s) [2] r=-1 lpr=65 pi=[59,65)/1 crt=56'45 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 186.826583862s@ mbc={}] state<Start>: transitioning to Stray
Dec 11 09:17:09 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 65 pg[11.13( v 53'48 (0'0,53'48] local-lis/les=61/62 n=0 ec=61/52 lis/c=61/61 les/c/f=62/62/0 sis=65 pruub=8.943531990s) [2] r=-1 lpr=65 pi=[61,65)/1 crt=53'48 lcod 0'0 mlcod 0'0 active pruub 179.837249756s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 11 09:17:09 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 65 pg[11.13( v 53'48 (0'0,53'48] local-lis/les=61/62 n=0 ec=61/52 lis/c=61/61 les/c/f=62/62/0 sis=65 pruub=8.943511963s) [2] r=-1 lpr=65 pi=[61,65)/1 crt=53'48 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 179.837249756s@ mbc={}] state<Start>: transitioning to Stray
Dec 11 09:17:09 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 65 pg[8.15( v 56'45 (0'0,56'45] local-lis/les=59/61 n=0 ec=59/45 lis/c=59/59 les/c/f=61/61/0 sis=65 pruub=15.932778358s) [2] r=-1 lpr=65 pi=[59,65)/1 crt=56'45 lcod 0'0 mlcod 0'0 active pruub 186.826293945s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 11 09:17:09 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 65 pg[8.10( v 61'48 (0'0,61'48] local-lis/les=59/61 n=0 ec=59/45 lis/c=59/59 les/c/f=61/61/0 sis=65 pruub=15.932927132s) [0] r=-1 lpr=65 pi=[59,65)/1 crt=56'45 lcod 61'47 mlcod 61'47 active pruub 186.826766968s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 11 09:17:09 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 65 pg[8.10( v 61'48 (0'0,61'48] local-lis/les=59/61 n=0 ec=59/45 lis/c=59/59 les/c/f=61/61/0 sis=65 pruub=15.932901382s) [0] r=-1 lpr=65 pi=[59,65)/1 crt=56'45 lcod 61'47 mlcod 0'0 unknown NOTIFY pruub 186.826766968s@ mbc={}] state<Start>: transitioning to Stray
Dec 11 09:17:09 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 65 pg[9.11( v 49'12 (0'0,49'12] local-lis/les=59/61 n=1 ec=59/48 lis/c=59/59 les/c/f=61/61/0 sis=65 pruub=15.932734489s) [0] r=-1 lpr=65 pi=[59,65)/1 crt=49'12 lcod 0'0 mlcod 0'0 active pruub 186.826599121s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 11 09:17:09 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 65 pg[9.11( v 49'12 (0'0,49'12] local-lis/les=59/61 n=1 ec=59/48 lis/c=59/59 les/c/f=61/61/0 sis=65 pruub=15.932692528s) [0] r=-1 lpr=65 pi=[59,65)/1 crt=49'12 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 186.826599121s@ mbc={}] state<Start>: transitioning to Stray
Dec 11 09:17:09 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 65 pg[11.12( v 53'48 (0'0,53'48] local-lis/les=61/62 n=0 ec=61/52 lis/c=61/61 les/c/f=62/62/0 sis=65 pruub=8.943425179s) [0] r=-1 lpr=65 pi=[61,65)/1 crt=53'48 lcod 0'0 mlcod 0'0 active pruub 179.837432861s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 11 09:17:09 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 65 pg[11.12( v 53'48 (0'0,53'48] local-lis/les=61/62 n=0 ec=61/52 lis/c=61/61 les/c/f=62/62/0 sis=65 pruub=8.943414688s) [0] r=-1 lpr=65 pi=[61,65)/1 crt=53'48 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 179.837432861s@ mbc={}] state<Start>: transitioning to Stray
Dec 11 09:17:09 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 65 pg[8.11( v 56'45 (0'0,56'45] local-lis/les=59/61 n=0 ec=59/45 lis/c=59/59 les/c/f=61/61/0 sis=65 pruub=15.932709694s) [2] r=-1 lpr=65 pi=[59,65)/1 crt=56'45 lcod 0'0 mlcod 0'0 active pruub 186.826812744s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 11 09:17:09 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 65 pg[9.16( v 49'12 (0'0,49'12] local-lis/les=59/61 n=0 ec=59/48 lis/c=59/59 les/c/f=61/61/0 sis=65 pruub=15.932257652s) [2] r=-1 lpr=65 pi=[59,65)/1 crt=49'12 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 186.826248169s@ mbc={}] state<Start>: transitioning to Stray
Dec 11 09:17:09 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 65 pg[8.11( v 56'45 (0'0,56'45] local-lis/les=59/61 n=0 ec=59/45 lis/c=59/59 les/c/f=61/61/0 sis=65 pruub=15.932691574s) [2] r=-1 lpr=65 pi=[59,65)/1 crt=56'45 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 186.826812744s@ mbc={}] state<Start>: transitioning to Stray
Dec 11 09:17:09 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 65 pg[11.1( v 53'48 (0'0,53'48] local-lis/les=61/62 n=1 ec=61/52 lis/c=61/61 les/c/f=62/62/0 sis=65 pruub=8.943253517s) [0] r=-1 lpr=65 pi=[61,65)/1 crt=53'48 lcod 0'0 mlcod 0'0 active pruub 179.837432861s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 11 09:17:09 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 65 pg[9.10( v 49'12 (0'0,49'12] local-lis/les=59/61 n=0 ec=59/48 lis/c=59/59 les/c/f=61/61/0 sis=65 pruub=15.932616234s) [0] r=-1 lpr=65 pi=[59,65)/1 crt=49'12 lcod 0'0 mlcod 0'0 active pruub 186.826766968s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 11 09:17:09 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 65 pg[11.1( v 53'48 (0'0,53'48] local-lis/les=61/62 n=1 ec=61/52 lis/c=61/61 les/c/f=62/62/0 sis=65 pruub=8.943229675s) [0] r=-1 lpr=65 pi=[61,65)/1 crt=53'48 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 179.837432861s@ mbc={}] state<Start>: transitioning to Stray
Dec 11 09:17:09 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 65 pg[9.3( v 49'12 (0'0,49'12] local-lis/les=59/61 n=1 ec=59/48 lis/c=59/59 les/c/f=61/61/0 sis=65 pruub=15.932456970s) [2] r=-1 lpr=65 pi=[59,65)/1 crt=49'12 lcod 0'0 mlcod 0'0 active pruub 186.826766968s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 11 09:17:09 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 65 pg[9.3( v 49'12 (0'0,49'12] local-lis/les=59/61 n=1 ec=59/48 lis/c=59/59 les/c/f=61/61/0 sis=65 pruub=15.932439804s) [2] r=-1 lpr=65 pi=[59,65)/1 crt=49'12 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 186.826766968s@ mbc={}] state<Start>: transitioning to Stray
Dec 11 09:17:09 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 65 pg[8.2( v 56'45 (0'0,56'45] local-lis/les=59/61 n=1 ec=59/45 lis/c=59/59 les/c/f=61/61/0 sis=65 pruub=15.932498932s) [2] r=-1 lpr=65 pi=[59,65)/1 crt=56'45 lcod 0'0 mlcod 0'0 active pruub 186.826843262s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 11 09:17:09 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 65 pg[9.10( v 49'12 (0'0,49'12] local-lis/les=59/61 n=0 ec=59/48 lis/c=59/59 les/c/f=61/61/0 sis=65 pruub=15.932515144s) [0] r=-1 lpr=65 pi=[59,65)/1 crt=49'12 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 186.826766968s@ mbc={}] state<Start>: transitioning to Stray
Dec 11 09:17:09 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 65 pg[8.2( v 56'45 (0'0,56'45] local-lis/les=59/61 n=1 ec=59/45 lis/c=59/59 les/c/f=61/61/0 sis=65 pruub=15.932484627s) [2] r=-1 lpr=65 pi=[59,65)/1 crt=56'45 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 186.826843262s@ mbc={}] state<Start>: transitioning to Stray
Dec 11 09:17:09 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 65 pg[8.3( v 56'45 (0'0,56'45] local-lis/les=59/61 n=1 ec=59/45 lis/c=59/59 les/c/f=61/61/0 sis=65 pruub=15.932323456s) [2] r=-1 lpr=65 pi=[59,65)/1 crt=56'45 lcod 0'0 mlcod 0'0 active pruub 186.826843262s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 11 09:17:09 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 65 pg[8.3( v 56'45 (0'0,56'45] local-lis/les=59/61 n=1 ec=59/45 lis/c=59/59 les/c/f=61/61/0 sis=65 pruub=15.932301521s) [2] r=-1 lpr=65 pi=[59,65)/1 crt=56'45 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 186.826843262s@ mbc={}] state<Start>: transitioning to Stray
Dec 11 09:17:09 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 65 pg[9.e( v 49'12 (0'0,49'12] local-lis/les=59/61 n=0 ec=59/48 lis/c=59/59 les/c/f=61/61/0 sis=65 pruub=15.932182312s) [0] r=-1 lpr=65 pi=[59,65)/1 crt=49'12 lcod 0'0 mlcod 0'0 active pruub 186.826843262s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 11 09:17:09 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 65 pg[8.15( v 56'45 (0'0,56'45] local-lis/les=59/61 n=0 ec=59/45 lis/c=59/59 les/c/f=61/61/0 sis=65 pruub=15.932521820s) [2] r=-1 lpr=65 pi=[59,65)/1 crt=56'45 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 186.826293945s@ mbc={}] state<Start>: transitioning to Stray
Dec 11 09:17:09 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 65 pg[8.f( v 56'45 (0'0,56'45] local-lis/les=59/61 n=0 ec=59/45 lis/c=59/59 les/c/f=61/61/0 sis=65 pruub=15.932400703s) [2] r=-1 lpr=65 pi=[59,65)/1 crt=56'45 lcod 0'0 mlcod 0'0 active pruub 186.827117920s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 11 09:17:09 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 65 pg[9.e( v 49'12 (0'0,49'12] local-lis/les=59/61 n=0 ec=59/48 lis/c=59/59 les/c/f=61/61/0 sis=65 pruub=15.932107925s) [0] r=-1 lpr=65 pi=[59,65)/1 crt=49'12 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 186.826843262s@ mbc={}] state<Start>: transitioning to Stray
Dec 11 09:17:09 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 65 pg[9.9( v 49'12 (0'0,49'12] local-lis/les=59/61 n=0 ec=59/48 lis/c=59/59 les/c/f=61/61/0 sis=65 pruub=15.932153702s) [2] r=-1 lpr=65 pi=[59,65)/1 crt=49'12 lcod 0'0 mlcod 0'0 active pruub 186.826934814s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 11 09:17:09 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 65 pg[8.f( v 56'45 (0'0,56'45] local-lis/les=59/61 n=0 ec=59/45 lis/c=59/59 les/c/f=61/61/0 sis=65 pruub=15.932353973s) [2] r=-1 lpr=65 pi=[59,65)/1 crt=56'45 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 186.827117920s@ mbc={}] state<Start>: transitioning to Stray
Dec 11 09:17:09 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 65 pg[9.9( v 49'12 (0'0,49'12] local-lis/les=59/61 n=0 ec=59/48 lis/c=59/59 les/c/f=61/61/0 sis=65 pruub=15.932138443s) [2] r=-1 lpr=65 pi=[59,65)/1 crt=49'12 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 186.826934814s@ mbc={}] state<Start>: transitioning to Stray
Dec 11 09:17:09 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 65 pg[8.8( v 56'45 (0'0,56'45] local-lis/les=59/61 n=0 ec=59/45 lis/c=59/59 les/c/f=61/61/0 sis=65 pruub=15.932090759s) [0] r=-1 lpr=65 pi=[59,65)/1 crt=56'45 lcod 0'0 mlcod 0'0 active pruub 186.826965332s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 11 09:17:09 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 65 pg[11.a( v 53'48 (0'0,53'48] local-lis/les=61/62 n=0 ec=61/52 lis/c=61/61 les/c/f=62/62/0 sis=65 pruub=8.942596436s) [2] r=-1 lpr=65 pi=[61,65)/1 crt=53'48 lcod 0'0 mlcod 0'0 active pruub 179.837600708s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 11 09:17:09 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 65 pg[11.a( v 53'48 (0'0,53'48] local-lis/les=61/62 n=0 ec=61/52 lis/c=61/61 les/c/f=62/62/0 sis=65 pruub=8.942564964s) [2] r=-1 lpr=65 pi=[61,65)/1 crt=53'48 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 179.837600708s@ mbc={}] state<Start>: transitioning to Stray
Dec 11 09:17:09 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 65 pg[9.8( v 49'12 (0'0,49'12] local-lis/les=59/61 n=0 ec=59/48 lis/c=59/59 les/c/f=61/61/0 sis=65 pruub=15.932276726s) [2] r=-1 lpr=65 pi=[59,65)/1 crt=49'12 lcod 0'0 mlcod 0'0 active pruub 186.827362061s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 11 09:17:09 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 65 pg[8.8( v 56'45 (0'0,56'45] local-lis/les=59/61 n=0 ec=59/45 lis/c=59/59 les/c/f=61/61/0 sis=65 pruub=15.931942940s) [0] r=-1 lpr=65 pi=[59,65)/1 crt=56'45 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 186.826965332s@ mbc={}] state<Start>: transitioning to Stray
Dec 11 09:17:09 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 65 pg[9.8( v 49'12 (0'0,49'12] local-lis/les=59/61 n=0 ec=59/48 lis/c=59/59 les/c/f=61/61/0 sis=65 pruub=15.932257652s) [2] r=-1 lpr=65 pi=[59,65)/1 crt=49'12 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 186.827362061s@ mbc={}] state<Start>: transitioning to Stray
Dec 11 09:17:09 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 65 pg[9.b( v 49'12 (0'0,49'12] local-lis/les=59/61 n=0 ec=59/48 lis/c=59/59 les/c/f=61/61/0 sis=65 pruub=15.931882858s) [2] r=-1 lpr=65 pi=[59,65)/1 crt=49'12 lcod 0'0 mlcod 0'0 active pruub 186.827178955s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 11 09:17:09 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 65 pg[9.b( v 49'12 (0'0,49'12] local-lis/les=59/61 n=0 ec=59/48 lis/c=59/59 les/c/f=61/61/0 sis=65 pruub=15.931863785s) [2] r=-1 lpr=65 pi=[59,65)/1 crt=49'12 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 186.827178955s@ mbc={}] state<Start>: transitioning to Stray
Dec 11 09:17:09 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 65 pg[8.a( v 56'45 (0'0,56'45] local-lis/les=59/61 n=0 ec=59/45 lis/c=59/59 les/c/f=61/61/0 sis=65 pruub=15.931719780s) [2] r=-1 lpr=65 pi=[59,65)/1 crt=56'45 lcod 0'0 mlcod 0'0 active pruub 186.827117920s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 11 09:17:09 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 65 pg[8.a( v 56'45 (0'0,56'45] local-lis/les=59/61 n=0 ec=59/45 lis/c=59/59 les/c/f=61/61/0 sis=65 pruub=15.931690216s) [2] r=-1 lpr=65 pi=[59,65)/1 crt=56'45 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 186.827117920s@ mbc={}] state<Start>: transitioning to Stray
Dec 11 09:17:09 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 65 pg[9.f( v 49'12 (0'0,49'12] local-lis/les=59/61 n=0 ec=59/48 lis/c=59/59 les/c/f=61/61/0 sis=65 pruub=15.931821823s) [0] r=-1 lpr=65 pi=[59,65)/1 crt=49'12 lcod 0'0 mlcod 0'0 active pruub 186.827331543s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 11 09:17:09 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 65 pg[9.f( v 49'12 (0'0,49'12] local-lis/les=59/61 n=0 ec=59/48 lis/c=59/59 les/c/f=61/61/0 sis=65 pruub=15.931801796s) [0] r=-1 lpr=65 pi=[59,65)/1 crt=49'12 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 186.827331543s@ mbc={}] state<Start>: transitioning to Stray
Dec 11 09:17:09 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 65 pg[8.9( v 56'45 (0'0,56'45] local-lis/les=59/61 n=0 ec=59/45 lis/c=59/59 les/c/f=61/61/0 sis=65 pruub=15.931291580s) [2] r=-1 lpr=65 pi=[59,65)/1 crt=56'45 lcod 0'0 mlcod 0'0 active pruub 186.826965332s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 11 09:17:09 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 65 pg[11.e( v 62'51 (0'0,62'51] local-lis/les=61/62 n=0 ec=61/52 lis/c=61/61 les/c/f=62/62/0 sis=65 pruub=8.941934586s) [2] r=-1 lpr=65 pi=[61,65)/1 crt=62'49 lcod 62'50 mlcod 62'50 active pruub 179.837661743s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 11 09:17:09 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 65 pg[8.d( v 56'45 (0'0,56'45] local-lis/les=59/61 n=0 ec=59/45 lis/c=59/59 les/c/f=61/61/0 sis=65 pruub=15.931570053s) [2] r=-1 lpr=65 pi=[59,65)/1 crt=56'45 lcod 0'0 mlcod 0'0 active pruub 186.827346802s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 11 09:17:09 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 65 pg[8.d( v 56'45 (0'0,56'45] local-lis/les=59/61 n=0 ec=59/45 lis/c=59/59 les/c/f=61/61/0 sis=65 pruub=15.931551933s) [2] r=-1 lpr=65 pi=[59,65)/1 crt=56'45 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 186.827346802s@ mbc={}] state<Start>: transitioning to Stray
Dec 11 09:17:09 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 65 pg[11.e( v 62'51 (0'0,62'51] local-lis/les=61/62 n=0 ec=61/52 lis/c=61/61 les/c/f=62/62/0 sis=65 pruub=8.941896439s) [2] r=-1 lpr=65 pi=[61,65)/1 crt=62'49 lcod 62'50 mlcod 0'0 unknown NOTIFY pruub 179.837661743s@ mbc={}] state<Start>: transitioning to Stray
Dec 11 09:17:09 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 65 pg[9.d( v 49'12 (0'0,49'12] local-lis/les=59/61 n=0 ec=59/48 lis/c=59/59 les/c/f=61/61/0 sis=65 pruub=15.931307793s) [0] r=-1 lpr=65 pi=[59,65)/1 crt=49'12 lcod 0'0 mlcod 0'0 active pruub 186.827362061s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 11 09:17:09 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 65 pg[11.f( v 53'48 (0'0,53'48] local-lis/les=61/62 n=0 ec=61/52 lis/c=61/61 les/c/f=62/62/0 sis=65 pruub=8.941629410s) [0] r=-1 lpr=65 pi=[61,65)/1 crt=53'48 lcod 0'0 mlcod 0'0 active pruub 179.837692261s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 11 09:17:09 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 65 pg[9.d( v 49'12 (0'0,49'12] local-lis/les=59/61 n=0 ec=59/48 lis/c=59/59 les/c/f=61/61/0 sis=65 pruub=15.931288719s) [0] r=-1 lpr=65 pi=[59,65)/1 crt=49'12 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 186.827362061s@ mbc={}] state<Start>: transitioning to Stray
Dec 11 09:17:09 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 65 pg[11.f( v 53'48 (0'0,53'48] local-lis/les=61/62 n=0 ec=61/52 lis/c=61/61 les/c/f=62/62/0 sis=65 pruub=8.941610336s) [0] r=-1 lpr=65 pi=[61,65)/1 crt=53'48 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 179.837692261s@ mbc={}] state<Start>: transitioning to Stray
Dec 11 09:17:09 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 65 pg[8.c( v 56'45 (0'0,56'45] local-lis/les=59/61 n=0 ec=59/45 lis/c=59/59 les/c/f=61/61/0 sis=65 pruub=15.931130409s) [2] r=-1 lpr=65 pi=[59,65)/1 crt=56'45 lcod 0'0 mlcod 0'0 active pruub 186.827423096s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 11 09:17:09 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 65 pg[8.9( v 56'45 (0'0,56'45] local-lis/les=59/61 n=0 ec=59/45 lis/c=59/59 les/c/f=61/61/0 sis=65 pruub=15.931242943s) [2] r=-1 lpr=65 pi=[59,65)/1 crt=56'45 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 186.826965332s@ mbc={}] state<Start>: transitioning to Stray
Dec 11 09:17:09 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 65 pg[11.8( v 53'48 (0'0,53'48] local-lis/les=61/62 n=1 ec=61/52 lis/c=61/61 les/c/f=62/62/0 sis=65 pruub=8.941375732s) [2] r=-1 lpr=65 pi=[61,65)/1 crt=53'48 lcod 0'0 mlcod 0'0 active pruub 179.837738037s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 11 09:17:09 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 65 pg[9.a( v 49'12 (0'0,49'12] local-lis/les=59/61 n=0 ec=59/48 lis/c=59/59 les/c/f=61/61/0 sis=65 pruub=15.931086540s) [0] r=-1 lpr=65 pi=[59,65)/1 crt=49'12 lcod 0'0 mlcod 0'0 active pruub 186.827453613s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 11 09:17:09 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 65 pg[11.8( v 53'48 (0'0,53'48] local-lis/les=61/62 n=1 ec=61/52 lis/c=61/61 les/c/f=62/62/0 sis=65 pruub=8.941356659s) [2] r=-1 lpr=65 pi=[61,65)/1 crt=53'48 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 179.837738037s@ mbc={}] state<Start>: transitioning to Stray
Dec 11 09:17:09 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 65 pg[9.a( v 49'12 (0'0,49'12] local-lis/les=59/61 n=0 ec=59/48 lis/c=59/59 les/c/f=61/61/0 sis=65 pruub=15.931069374s) [0] r=-1 lpr=65 pi=[59,65)/1 crt=49'12 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 186.827453613s@ mbc={}] state<Start>: transitioning to Stray
Dec 11 09:17:09 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 65 pg[8.c( v 56'45 (0'0,56'45] local-lis/les=59/61 n=0 ec=59/45 lis/c=59/59 les/c/f=61/61/0 sis=65 pruub=15.930910110s) [2] r=-1 lpr=65 pi=[59,65)/1 crt=56'45 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 186.827423096s@ mbc={}] state<Start>: transitioning to Stray
Dec 11 09:17:09 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 65 pg[8.b( v 56'45 (0'0,56'45] local-lis/les=59/61 n=0 ec=59/45 lis/c=59/59 les/c/f=61/61/0 sis=65 pruub=15.931299210s) [2] r=-1 lpr=65 pi=[59,65)/1 crt=56'45 lcod 0'0 mlcod 0'0 active pruub 186.827819824s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 11 09:17:09 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 65 pg[8.b( v 56'45 (0'0,56'45] local-lis/les=59/61 n=0 ec=59/45 lis/c=59/59 les/c/f=61/61/0 sis=65 pruub=15.931254387s) [2] r=-1 lpr=65 pi=[59,65)/1 crt=56'45 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 186.827819824s@ mbc={}] state<Start>: transitioning to Stray
Dec 11 09:17:09 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 65 pg[11.3( v 62'51 (0'0,62'51] local-lis/les=61/62 n=1 ec=61/52 lis/c=61/61 les/c/f=62/62/0 sis=65 pruub=8.940926552s) [2] r=-1 lpr=65 pi=[61,65)/1 crt=62'49 lcod 62'50 mlcod 62'50 active pruub 179.837738037s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 11 09:17:09 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 65 pg[11.4( v 53'48 (0'0,53'48] local-lis/les=61/62 n=1 ec=61/52 lis/c=61/61 les/c/f=62/62/0 sis=65 pruub=8.940913200s) [0] r=-1 lpr=65 pi=[61,65)/1 crt=53'48 lcod 0'0 mlcod 0'0 active pruub 179.837783813s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 11 09:17:09 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 65 pg[11.3( v 62'51 (0'0,62'51] local-lis/les=61/62 n=1 ec=61/52 lis/c=61/61 les/c/f=62/62/0 sis=65 pruub=8.940883636s) [2] r=-1 lpr=65 pi=[61,65)/1 crt=62'49 lcod 62'50 mlcod 0'0 unknown NOTIFY pruub 179.837738037s@ mbc={}] state<Start>: transitioning to Stray
Dec 11 09:17:09 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 65 pg[11.4( v 53'48 (0'0,53'48] local-lis/les=61/62 n=1 ec=61/52 lis/c=61/61 les/c/f=62/62/0 sis=65 pruub=8.940879822s) [0] r=-1 lpr=65 pi=[61,65)/1 crt=53'48 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 179.837783813s@ mbc={}] state<Start>: transitioning to Stray
Dec 11 09:17:09 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 65 pg[9.6( v 49'12 (0'0,49'12] local-lis/les=59/61 n=1 ec=59/48 lis/c=59/59 les/c/f=61/61/0 sis=65 pruub=15.930724144s) [0] r=-1 lpr=65 pi=[59,65)/1 crt=49'12 lcod 0'0 mlcod 0'0 active pruub 186.827804565s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 11 09:17:09 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 65 pg[9.6( v 49'12 (0'0,49'12] local-lis/les=59/61 n=1 ec=59/48 lis/c=59/59 les/c/f=61/61/0 sis=65 pruub=15.930666924s) [0] r=-1 lpr=65 pi=[59,65)/1 crt=49'12 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 186.827804565s@ mbc={}] state<Start>: transitioning to Stray
Dec 11 09:17:09 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 65 pg[11.5( v 53'48 (0'0,53'48] local-lis/les=61/62 n=1 ec=61/52 lis/c=61/61 les/c/f=62/62/0 sis=65 pruub=8.940687180s) [0] r=-1 lpr=65 pi=[61,65)/1 crt=53'48 lcod 0'0 mlcod 0'0 active pruub 179.837844849s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 11 09:17:09 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 65 pg[9.7( v 49'12 (0'0,49'12] local-lis/les=59/61 n=0 ec=59/48 lis/c=59/59 les/c/f=61/61/0 sis=65 pruub=15.930531502s) [2] r=-1 lpr=65 pi=[59,65)/1 crt=49'12 lcod 0'0 mlcod 0'0 active pruub 186.827758789s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 11 09:17:09 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 65 pg[11.5( v 53'48 (0'0,53'48] local-lis/les=61/62 n=1 ec=61/52 lis/c=61/61 les/c/f=62/62/0 sis=65 pruub=8.940638542s) [0] r=-1 lpr=65 pi=[61,65)/1 crt=53'48 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 179.837844849s@ mbc={}] state<Start>: transitioning to Stray
Dec 11 09:17:09 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 65 pg[8.6( v 56'45 (0'0,56'45] local-lis/les=59/61 n=0 ec=59/45 lis/c=59/59 les/c/f=61/61/0 sis=65 pruub=15.937438965s) [2] r=-1 lpr=65 pi=[59,65)/1 crt=56'45 lcod 0'0 mlcod 0'0 active pruub 186.834777832s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 11 09:17:09 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 65 pg[8.6( v 56'45 (0'0,56'45] local-lis/les=59/61 n=0 ec=59/45 lis/c=59/59 les/c/f=61/61/0 sis=65 pruub=15.937411308s) [2] r=-1 lpr=65 pi=[59,65)/1 crt=56'45 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 186.834777832s@ mbc={}] state<Start>: transitioning to Stray
Dec 11 09:17:09 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 65 pg[8.5( v 56'45 (0'0,56'45] local-lis/les=59/61 n=1 ec=59/45 lis/c=59/59 les/c/f=61/61/0 sis=65 pruub=15.930406570s) [2] r=-1 lpr=65 pi=[59,65)/1 crt=56'45 lcod 0'0 mlcod 0'0 active pruub 186.827835083s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 11 09:17:09 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 65 pg[8.5( v 56'45 (0'0,56'45] local-lis/les=59/61 n=1 ec=59/45 lis/c=59/59 les/c/f=61/61/0 sis=65 pruub=15.930383682s) [2] r=-1 lpr=65 pi=[59,65)/1 crt=56'45 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 186.827835083s@ mbc={}] state<Start>: transitioning to Stray
Dec 11 09:17:09 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 65 pg[11.7( v 53'48 (0'0,53'48] local-lis/les=61/62 n=1 ec=61/52 lis/c=61/61 les/c/f=62/62/0 sis=65 pruub=8.940284729s) [0] r=-1 lpr=65 pi=[61,65)/1 crt=53'48 lcod 0'0 mlcod 0'0 active pruub 179.837875366s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 11 09:17:09 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 65 pg[9.5( v 49'12 (0'0,49'12] local-lis/les=59/61 n=1 ec=59/48 lis/c=59/59 les/c/f=61/61/0 sis=65 pruub=15.930240631s) [2] r=-1 lpr=65 pi=[59,65)/1 crt=49'12 lcod 0'0 mlcod 0'0 active pruub 186.827835083s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 11 09:17:09 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 65 pg[11.7( v 53'48 (0'0,53'48] local-lis/les=61/62 n=1 ec=61/52 lis/c=61/61 les/c/f=62/62/0 sis=65 pruub=8.940267563s) [0] r=-1 lpr=65 pi=[61,65)/1 crt=53'48 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 179.837875366s@ mbc={}] state<Start>: transitioning to Stray
Dec 11 09:17:09 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 65 pg[9.5( v 49'12 (0'0,49'12] local-lis/les=59/61 n=1 ec=59/48 lis/c=59/59 les/c/f=61/61/0 sis=65 pruub=15.930221558s) [2] r=-1 lpr=65 pi=[59,65)/1 crt=49'12 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 186.827835083s@ mbc={}] state<Start>: transitioning to Stray
Dec 11 09:17:09 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 65 pg[8.4( v 56'45 (0'0,56'45] local-lis/les=59/61 n=1 ec=59/45 lis/c=59/59 les/c/f=61/61/0 sis=65 pruub=15.930262566s) [0] r=-1 lpr=65 pi=[59,65)/1 crt=56'45 lcod 0'0 mlcod 0'0 active pruub 186.827911377s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 11 09:17:09 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 65 pg[8.4( v 56'45 (0'0,56'45] local-lis/les=59/61 n=1 ec=59/45 lis/c=59/59 les/c/f=61/61/0 sis=65 pruub=15.930247307s) [0] r=-1 lpr=65 pi=[59,65)/1 crt=56'45 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 186.827911377s@ mbc={}] state<Start>: transitioning to Stray
Dec 11 09:17:09 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 65 pg[8.1b( v 56'45 (0'0,56'45] local-lis/les=59/61 n=0 ec=59/45 lis/c=59/59 les/c/f=61/61/0 sis=65 pruub=15.937071800s) [0] r=-1 lpr=65 pi=[59,65)/1 crt=56'45 lcod 0'0 mlcod 0'0 active pruub 186.834899902s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 11 09:17:09 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 65 pg[11.19( v 53'48 (0'0,53'48] local-lis/les=61/62 n=0 ec=61/52 lis/c=61/61 les/c/f=62/62/0 sis=65 pruub=8.940012932s) [2] r=-1 lpr=65 pi=[61,65)/1 crt=53'48 lcod 0'0 mlcod 0'0 active pruub 179.837860107s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 11 09:17:09 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 65 pg[8.1b( v 56'45 (0'0,56'45] local-lis/les=59/61 n=0 ec=59/45 lis/c=59/59 les/c/f=61/61/0 sis=65 pruub=15.937056541s) [0] r=-1 lpr=65 pi=[59,65)/1 crt=56'45 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 186.834899902s@ mbc={}] state<Start>: transitioning to Stray
Dec 11 09:17:09 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 65 pg[11.19( v 53'48 (0'0,53'48] local-lis/les=61/62 n=0 ec=61/52 lis/c=61/61 les/c/f=62/62/0 sis=65 pruub=8.939988136s) [2] r=-1 lpr=65 pi=[61,65)/1 crt=53'48 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 179.837860107s@ mbc={}] state<Start>: transitioning to Stray
Dec 11 09:17:09 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 65 pg[9.18( v 49'12 (0'0,49'12] local-lis/les=59/61 n=0 ec=59/48 lis/c=59/59 les/c/f=61/61/0 sis=65 pruub=15.936841965s) [2] r=-1 lpr=65 pi=[59,65)/1 crt=49'12 lcod 0'0 mlcod 0'0 active pruub 186.834838867s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 11 09:17:09 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 65 pg[11.1a( v 53'48 (0'0,53'48] local-lis/les=61/62 n=0 ec=61/52 lis/c=61/61 les/c/f=62/62/0 sis=65 pruub=8.939873695s) [0] r=-1 lpr=65 pi=[61,65)/1 crt=53'48 lcod 0'0 mlcod 0'0 active pruub 179.837875366s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 11 09:17:09 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 65 pg[9.18( v 49'12 (0'0,49'12] local-lis/les=59/61 n=0 ec=59/48 lis/c=59/59 les/c/f=61/61/0 sis=65 pruub=15.936828613s) [2] r=-1 lpr=65 pi=[59,65)/1 crt=49'12 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 186.834838867s@ mbc={}] state<Start>: transitioning to Stray
Dec 11 09:17:09 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 65 pg[11.1a( v 53'48 (0'0,53'48] local-lis/les=61/62 n=0 ec=61/52 lis/c=61/61 les/c/f=62/62/0 sis=65 pruub=8.939853668s) [0] r=-1 lpr=65 pi=[61,65)/1 crt=53'48 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 179.837875366s@ mbc={}] state<Start>: transitioning to Stray
Dec 11 09:17:09 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 65 pg[11.1b( v 53'48 (0'0,53'48] local-lis/les=61/62 n=0 ec=61/52 lis/c=61/61 les/c/f=62/62/0 sis=65 pruub=8.939834595s) [0] r=-1 lpr=65 pi=[61,65)/1 crt=53'48 lcod 0'0 mlcod 0'0 active pruub 179.837936401s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 11 09:17:09 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 65 pg[8.18( v 56'45 (0'0,56'45] local-lis/les=59/61 n=0 ec=59/45 lis/c=59/59 les/c/f=61/61/0 sis=65 pruub=15.936674118s) [0] r=-1 lpr=65 pi=[59,65)/1 crt=56'45 lcod 0'0 mlcod 0'0 active pruub 186.834838867s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 11 09:17:09 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 65 pg[11.1b( v 53'48 (0'0,53'48] local-lis/les=61/62 n=0 ec=61/52 lis/c=61/61 les/c/f=62/62/0 sis=65 pruub=8.939805031s) [0] r=-1 lpr=65 pi=[61,65)/1 crt=53'48 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 179.837936401s@ mbc={}] state<Start>: transitioning to Stray
Dec 11 09:17:09 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 65 pg[8.19( v 56'45 (0'0,56'45] local-lis/les=59/61 n=0 ec=59/45 lis/c=59/59 les/c/f=61/61/0 sis=65 pruub=15.936667442s) [0] r=-1 lpr=65 pi=[59,65)/1 crt=56'45 lcod 0'0 mlcod 0'0 active pruub 186.834838867s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 11 09:17:09 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 65 pg[8.18( v 56'45 (0'0,56'45] local-lis/les=59/61 n=0 ec=59/45 lis/c=59/59 les/c/f=61/61/0 sis=65 pruub=15.936659813s) [0] r=-1 lpr=65 pi=[59,65)/1 crt=56'45 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 186.834838867s@ mbc={}] state<Start>: transitioning to Stray
Dec 11 09:17:09 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 65 pg[8.19( v 56'45 (0'0,56'45] local-lis/les=59/61 n=0 ec=59/45 lis/c=59/59 les/c/f=61/61/0 sis=65 pruub=15.936653137s) [0] r=-1 lpr=65 pi=[59,65)/1 crt=56'45 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 186.834838867s@ mbc={}] state<Start>: transitioning to Stray
Dec 11 09:17:09 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 65 pg[11.1c( v 53'48 (0'0,53'48] local-lis/les=61/62 n=0 ec=61/52 lis/c=61/61 les/c/f=62/62/0 sis=65 pruub=8.939710617s) [0] r=-1 lpr=65 pi=[61,65)/1 crt=53'48 lcod 0'0 mlcod 0'0 active pruub 179.837936401s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 11 09:17:09 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 65 pg[11.1c( v 53'48 (0'0,53'48] local-lis/les=61/62 n=0 ec=61/52 lis/c=61/61 les/c/f=62/62/0 sis=65 pruub=8.939693451s) [0] r=-1 lpr=65 pi=[61,65)/1 crt=53'48 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 179.837936401s@ mbc={}] state<Start>: transitioning to Stray
Dec 11 09:17:09 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 65 pg[8.1f( v 56'45 (0'0,56'45] local-lis/les=59/61 n=0 ec=59/45 lis/c=59/59 les/c/f=61/61/0 sis=65 pruub=15.936522484s) [2] r=-1 lpr=65 pi=[59,65)/1 crt=56'45 lcod 0'0 mlcod 0'0 active pruub 186.834869385s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 11 09:17:09 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 65 pg[8.1f( v 56'45 (0'0,56'45] local-lis/les=59/61 n=0 ec=59/45 lis/c=59/59 les/c/f=61/61/0 sis=65 pruub=15.936508179s) [2] r=-1 lpr=65 pi=[59,65)/1 crt=56'45 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 186.834869385s@ mbc={}] state<Start>: transitioning to Stray
Dec 11 09:17:09 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 65 pg[9.1d( v 49'12 (0'0,49'12] local-lis/les=59/61 n=0 ec=59/48 lis/c=59/59 les/c/f=61/61/0 sis=65 pruub=15.936566353s) [2] r=-1 lpr=65 pi=[59,65)/1 crt=49'12 lcod 0'0 mlcod 0'0 active pruub 186.835128784s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 11 09:17:09 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 65 pg[9.7( v 49'12 (0'0,49'12] local-lis/les=59/61 n=0 ec=59/48 lis/c=59/59 les/c/f=61/61/0 sis=65 pruub=15.930131912s) [2] r=-1 lpr=65 pi=[59,65)/1 crt=49'12 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 186.827758789s@ mbc={}] state<Start>: transitioning to Stray
Dec 11 09:17:09 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 65 pg[11.1d( v 53'48 (0'0,53'48] local-lis/les=61/62 n=0 ec=61/52 lis/c=61/61 les/c/f=62/62/0 sis=65 pruub=8.939393997s) [0] r=-1 lpr=65 pi=[61,65)/1 crt=53'48 lcod 0'0 mlcod 0'0 active pruub 179.838043213s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 11 09:17:09 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 65 pg[11.1d( v 53'48 (0'0,53'48] local-lis/les=61/62 n=0 ec=61/52 lis/c=61/61 les/c/f=62/62/0 sis=65 pruub=8.939376831s) [0] r=-1 lpr=65 pi=[61,65)/1 crt=53'48 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 179.838043213s@ mbc={}] state<Start>: transitioning to Stray
Dec 11 09:17:09 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 65 pg[8.1c( v 56'45 (0'0,56'45] local-lis/les=59/61 n=0 ec=59/45 lis/c=59/59 les/c/f=61/61/0 sis=65 pruub=15.936406136s) [2] r=-1 lpr=65 pi=[59,65)/1 crt=56'45 lcod 0'0 mlcod 0'0 active pruub 186.835144043s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 11 09:17:09 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 65 pg[8.1c( v 56'45 (0'0,56'45] local-lis/les=59/61 n=0 ec=59/45 lis/c=59/59 les/c/f=61/61/0 sis=65 pruub=15.936386108s) [2] r=-1 lpr=65 pi=[59,65)/1 crt=56'45 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 186.835144043s@ mbc={}] state<Start>: transitioning to Stray
Dec 11 09:17:09 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 65 pg[9.12( v 49'12 (0'0,49'12] local-lis/les=59/61 n=0 ec=59/48 lis/c=59/59 les/c/f=61/61/0 sis=65 pruub=15.936223030s) [0] r=-1 lpr=65 pi=[59,65)/1 crt=49'12 lcod 0'0 mlcod 0'0 active pruub 186.835128784s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 11 09:17:09 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 65 pg[9.12( v 49'12 (0'0,49'12] local-lis/les=59/61 n=0 ec=59/48 lis/c=59/59 les/c/f=61/61/0 sis=65 pruub=15.936207771s) [0] r=-1 lpr=65 pi=[59,65)/1 crt=49'12 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 186.835128784s@ mbc={}] state<Start>: transitioning to Stray
Dec 11 09:17:09 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 65 pg[9.1d( v 49'12 (0'0,49'12] local-lis/les=59/61 n=0 ec=59/48 lis/c=59/59 les/c/f=61/61/0 sis=65 pruub=15.936553955s) [2] r=-1 lpr=65 pi=[59,65)/1 crt=49'12 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 186.835128784s@ mbc={}] state<Start>: transitioning to Stray
Dec 11 09:17:09 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 65 pg[11.1e( v 53'48 (0'0,53'48] local-lis/les=61/62 n=0 ec=61/52 lis/c=61/61 les/c/f=62/62/0 sis=65 pruub=8.938913345s) [0] r=-1 lpr=65 pi=[61,65)/1 crt=53'48 lcod 0'0 mlcod 0'0 active pruub 179.837997437s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 11 09:17:09 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 65 pg[9.13( v 49'12 (0'0,49'12] local-lis/les=59/61 n=0 ec=59/48 lis/c=59/59 les/c/f=61/61/0 sis=65 pruub=15.936248779s) [2] r=-1 lpr=65 pi=[59,65)/1 crt=49'12 lcod 0'0 mlcod 0'0 active pruub 186.835357666s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 11 09:17:09 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 65 pg[11.1e( v 53'48 (0'0,53'48] local-lis/les=61/62 n=0 ec=61/52 lis/c=61/61 les/c/f=62/62/0 sis=65 pruub=8.938894272s) [0] r=-1 lpr=65 pi=[61,65)/1 crt=53'48 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 179.837997437s@ mbc={}] state<Start>: transitioning to Stray
Dec 11 09:17:09 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 65 pg[9.13( v 49'12 (0'0,49'12] local-lis/les=59/61 n=0 ec=59/48 lis/c=59/59 les/c/f=61/61/0 sis=65 pruub=15.936231613s) [2] r=-1 lpr=65 pi=[59,65)/1 crt=49'12 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 186.835357666s@ mbc={}] state<Start>: transitioning to Stray
Dec 11 09:17:09 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 65 pg[8.12( v 56'45 (0'0,56'45] local-lis/les=59/61 n=0 ec=59/45 lis/c=59/59 les/c/f=61/61/0 sis=65 pruub=15.936066628s) [0] r=-1 lpr=65 pi=[59,65)/1 crt=56'45 lcod 0'0 mlcod 0'0 active pruub 186.835205078s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 11 09:17:09 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 65 pg[8.12( v 56'45 (0'0,56'45] local-lis/les=59/61 n=0 ec=59/45 lis/c=59/59 les/c/f=61/61/0 sis=65 pruub=15.936048508s) [0] r=-1 lpr=65 pi=[59,65)/1 crt=56'45 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 186.835205078s@ mbc={}] state<Start>: transitioning to Stray
Dec 11 09:17:10 compute-0 ceph-mon[74426]: 10.1f scrub starts
Dec 11 09:17:10 compute-0 ceph-mon[74426]: 10.1f scrub ok
Dec 11 09:17:10 compute-0 ceph-mon[74426]: pgmap v60: 353 pgs: 353 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail
Dec 11 09:17:10 compute-0 ceph-mon[74426]: 9.10 scrub starts
Dec 11 09:17:10 compute-0 ceph-mon[74426]: 9.10 scrub ok
Dec 11 09:17:10 compute-0 ceph-mon[74426]: from='mgr.14400 192.168.122.100:0/1063598679' entity='mgr.compute-0.wwpcae' cmd='[{"prefix": "osd pool set", "pool": ".nfs", "var": "pgp_num_actual", "val": "32"}]': finished
Dec 11 09:17:10 compute-0 ceph-mon[74426]: from='mgr.14400 192.168.122.100:0/1063598679' entity='mgr.compute-0.wwpcae' cmd='[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pgp_num_actual", "val": "32"}]': finished
Dec 11 09:17:10 compute-0 ceph-mon[74426]: from='mgr.14400 192.168.122.100:0/1063598679' entity='mgr.compute-0.wwpcae' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pgp_num_actual", "val": "32"}]': finished
Dec 11 09:17:10 compute-0 ceph-mon[74426]: from='mgr.14400 192.168.122.100:0/1063598679' entity='mgr.compute-0.wwpcae' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "2"}]': finished
Dec 11 09:17:10 compute-0 ceph-mon[74426]: from='mgr.14400 192.168.122.100:0/1063598679' entity='mgr.compute-0.wwpcae' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pgp_num_actual", "val": "32"}]': finished
Dec 11 09:17:10 compute-0 ceph-mon[74426]: osdmap e65: 3 total, 3 up, 3 in
Dec 11 09:17:10 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-2-0-compute-0-iryjby[94782]: 11/12/2025 09:17:10 : epoch 693a8bfe : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fef58001820 fd 37 proxy header rest len failed header rlen = % (will set dead)
Dec 11 09:17:10 compute-0 ceph-osd[82859]: log_channel(cluster) log [DBG] : 9.14 scrub starts
Dec 11 09:17:10 compute-0 ceph-osd[82859]: log_channel(cluster) log [DBG] : 9.14 scrub ok
Dec 11 09:17:10 compute-0 ceph-mon[74426]: mon.compute-0@0(leader).osd e65 do_prune osdmap full prune enabled
Dec 11 09:17:10 compute-0 ceph-mgr[74715]: log_channel(cluster) log [DBG] : pgmap v62: 353 pgs: 353 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail
Dec 11 09:17:10 compute-0 ceph-mon[74426]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "3"} v 0)
Dec 11 09:17:10 compute-0 ceph-mon[74426]: log_channel(audit) log [INF] : from='mgr.14400 192.168.122.100:0/1063598679' entity='mgr.compute-0.wwpcae' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "3"}]: dispatch
Dec 11 09:17:10 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-2-0-compute-0-iryjby[94782]: 11/12/2025 09:17:10 : epoch 693a8bfe : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fef54001ac0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Dec 11 09:17:11 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-2-0-compute-0-iryjby[94782]: 11/12/2025 09:17:11 : epoch 693a8bfe : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fef60002b70 fd 47 proxy header rest len failed header rlen = % (will set dead)
Dec 11 09:17:11 compute-0 ceph-mon[74426]: mon.compute-0@0(leader).osd e66 e66: 3 total, 3 up, 3 in
Dec 11 09:17:11 compute-0 ceph-mon[74426]: log_channel(cluster) log [DBG] : osdmap e66: 3 total, 3 up, 3 in
Dec 11 09:17:11 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 66 pg[12.12( empty local-lis/les=65/66 n=0 ec=63/54 lis/c=63/63 les/c/f=64/64/0 sis=65) [1] r=0 lpr=65 pi=[63,65)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 11 09:17:11 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 66 pg[12.6( empty local-lis/les=65/66 n=0 ec=63/54 lis/c=63/63 les/c/f=64/64/0 sis=65) [1] r=0 lpr=65 pi=[63,65)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 11 09:17:11 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 66 pg[12.b( empty local-lis/les=65/66 n=0 ec=63/54 lis/c=63/63 les/c/f=64/64/0 sis=65) [1] r=0 lpr=65 pi=[63,65)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 11 09:17:11 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 66 pg[12.10( empty local-lis/les=65/66 n=0 ec=63/54 lis/c=63/63 les/c/f=64/64/0 sis=65) [1] r=0 lpr=65 pi=[63,65)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 11 09:17:11 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 66 pg[12.c( empty local-lis/les=65/66 n=0 ec=63/54 lis/c=63/63 les/c/f=64/64/0 sis=65) [1] r=0 lpr=65 pi=[63,65)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 11 09:17:11 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 66 pg[12.a( empty local-lis/les=65/66 n=0 ec=63/54 lis/c=63/63 les/c/f=64/64/0 sis=65) [1] r=0 lpr=65 pi=[63,65)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 11 09:17:11 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 66 pg[12.8( empty local-lis/les=65/66 n=0 ec=63/54 lis/c=63/63 les/c/f=64/64/0 sis=65) [1] r=0 lpr=65 pi=[63,65)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 11 09:17:11 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 66 pg[12.e( empty local-lis/les=65/66 n=0 ec=63/54 lis/c=63/63 les/c/f=64/64/0 sis=65) [1] r=0 lpr=65 pi=[63,65)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 11 09:17:11 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 66 pg[12.1c( empty local-lis/les=65/66 n=0 ec=63/54 lis/c=63/63 les/c/f=64/64/0 sis=65) [1] r=0 lpr=65 pi=[63,65)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 11 09:17:11 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 66 pg[12.19( empty local-lis/les=65/66 n=0 ec=63/54 lis/c=63/63 les/c/f=64/64/0 sis=65) [1] r=0 lpr=65 pi=[63,65)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 11 09:17:11 compute-0 ceph-osd[82859]: log_channel(cluster) log [DBG] : 11.15 scrub starts
Dec 11 09:17:11 compute-0 ceph-osd[82859]: log_channel(cluster) log [DBG] : 11.15 scrub ok
Dec 11 09:17:12 compute-0 ceph-mon[74426]: mon.compute-0@0(leader).osd e66 do_prune osdmap full prune enabled
Dec 11 09:17:12 compute-0 ceph-mgr[74715]: [progress INFO root] Completed event 6f9bc7c5-0dd7-45ef-95b7-8699965b82d0 (Global Recovery Event) in 10 seconds
Dec 11 09:17:12 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-2-0-compute-0-iryjby[94782]: 11/12/2025 09:17:12 : epoch 693a8bfe : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fef68002910 fd 47 proxy header rest len failed header rlen = % (will set dead)
Dec 11 09:17:12 compute-0 ceph-osd[82859]: log_channel(cluster) log [DBG] : 9.2 scrub starts
Dec 11 09:17:12 compute-0 ceph-osd[82859]: log_channel(cluster) log [DBG] : 9.2 scrub ok
Dec 11 09:17:12 compute-0 ceph-mgr[74715]: log_channel(cluster) log [DBG] : pgmap v64: 353 pgs: 353 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail
Dec 11 09:17:12 compute-0 ceph-mon[74426]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "3"} v 0)
Dec 11 09:17:12 compute-0 ceph-mon[74426]: log_channel(audit) log [INF] : from='mgr.14400 192.168.122.100:0/1063598679' entity='mgr.compute-0.wwpcae' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "3"}]: dispatch
Dec 11 09:17:12 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-2-0-compute-0-iryjby[94782]: 11/12/2025 09:17:12 : epoch 693a8bfe : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fef58002cb0 fd 47 proxy header rest len failed header rlen = % (will set dead)
Dec 11 09:17:13 compute-0 ceph-mon[74426]: 10.16 deep-scrub starts
Dec 11 09:17:13 compute-0 ceph-mon[74426]: 10.16 deep-scrub ok
Dec 11 09:17:13 compute-0 ceph-mon[74426]: 9.14 scrub starts
Dec 11 09:17:13 compute-0 ceph-mon[74426]: 9.14 scrub ok
Dec 11 09:17:13 compute-0 ceph-mon[74426]: from='mgr.14400 192.168.122.100:0/1063598679' entity='mgr.compute-0.wwpcae' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "3"}]: dispatch
Dec 11 09:17:13 compute-0 ceph-mon[74426]: log_channel(audit) log [INF] : from='mgr.14400 192.168.122.100:0/1063598679' entity='mgr.compute-0.wwpcae' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "3"}]': finished
Dec 11 09:17:13 compute-0 ceph-mon[74426]: mon.compute-0@0(leader).osd e67 e67: 3 total, 3 up, 3 in
Dec 11 09:17:13 compute-0 ceph-mon[74426]: log_channel(cluster) log [DBG] : osdmap e67: 3 total, 3 up, 3 in
Dec 11 09:17:13 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 67 pg[10.16( empty local-lis/les=0/0 n=0 ec=61/50 lis/c=61/61 les/c/f=62/62/0 sis=67) [1] r=0 lpr=67 pi=[61,67)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 11 09:17:13 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 67 pg[10.2( empty local-lis/les=0/0 n=0 ec=61/50 lis/c=61/61 les/c/f=62/62/0 sis=67) [1] r=0 lpr=67 pi=[61,67)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 11 09:17:13 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 67 pg[10.e( empty local-lis/les=0/0 n=0 ec=61/50 lis/c=61/61 les/c/f=62/62/0 sis=67) [1] r=0 lpr=67 pi=[61,67)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 11 09:17:13 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 67 pg[10.a( empty local-lis/les=0/0 n=0 ec=61/50 lis/c=61/61 les/c/f=62/62/0 sis=67) [1] r=0 lpr=67 pi=[61,67)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 11 09:17:13 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 67 pg[10.6( empty local-lis/les=0/0 n=0 ec=61/50 lis/c=61/61 les/c/f=62/62/0 sis=67) [1] r=0 lpr=67 pi=[61,67)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 11 09:17:13 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 67 pg[10.1e( empty local-lis/les=0/0 n=0 ec=61/50 lis/c=61/61 les/c/f=62/62/0 sis=67) [1] r=0 lpr=67 pi=[61,67)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 11 09:17:13 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 67 pg[10.1a( empty local-lis/les=0/0 n=0 ec=61/50 lis/c=61/61 les/c/f=62/62/0 sis=67) [1] r=0 lpr=67 pi=[61,67)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 11 09:17:13 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 67 pg[10.12( empty local-lis/les=0/0 n=0 ec=61/50 lis/c=61/61 les/c/f=62/62/0 sis=67) [1] r=0 lpr=67 pi=[61,67)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 11 09:17:13 compute-0 ceph-mon[74426]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0)
Dec 11 09:17:13 compute-0 ceph-mon[74426]: log_channel(audit) log [INF] : from='mgr.14400 192.168.122.100:0/1063598679' entity='mgr.compute-0.wwpcae' 
Dec 11 09:17:13 compute-0 ceph-mon[74426]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Dec 11 09:17:13 compute-0 ceph-mon[74426]: log_channel(audit) log [INF] : from='mgr.14400 192.168.122.100:0/1063598679' entity='mgr.compute-0.wwpcae' 
Dec 11 09:17:13 compute-0 ceph-mon[74426]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.ingress.nfs.cephfs}] v 0)
Dec 11 09:17:13 compute-0 ceph-mon[74426]: log_channel(audit) log [INF] : from='mgr.14400 192.168.122.100:0/1063598679' entity='mgr.compute-0.wwpcae' 
Dec 11 09:17:13 compute-0 ceph-mgr[74715]: [cephadm INFO cephadm.services.ingress] 192.168.122.2 is in 192.168.122.0/24 on compute-0 interface br-ex
Dec 11 09:17:13 compute-0 ceph-mgr[74715]: log_channel(cephadm) log [INF] : 192.168.122.2 is in 192.168.122.0/24 on compute-0 interface br-ex
Dec 11 09:17:13 compute-0 ceph-mgr[74715]: [cephadm INFO cephadm.services.ingress] 192.168.122.2 is in 192.168.122.0/24 on compute-1 interface br-ex
Dec 11 09:17:13 compute-0 ceph-mgr[74715]: log_channel(cephadm) log [INF] : 192.168.122.2 is in 192.168.122.0/24 on compute-1 interface br-ex
Dec 11 09:17:13 compute-0 ceph-mgr[74715]: [cephadm INFO cephadm.services.ingress] 192.168.122.2 is in 192.168.122.0/24 on compute-2 interface br-ex
Dec 11 09:17:13 compute-0 ceph-mgr[74715]: log_channel(cephadm) log [INF] : 192.168.122.2 is in 192.168.122.0/24 on compute-2 interface br-ex
Dec 11 09:17:13 compute-0 ceph-mgr[74715]: [cephadm INFO cephadm.serve] Deploying daemon keepalived.nfs.cephfs.compute-0.ewssxv on compute-0
Dec 11 09:17:13 compute-0 ceph-mgr[74715]: log_channel(cephadm) log [INF] : Deploying daemon keepalived.nfs.cephfs.compute-0.ewssxv on compute-0
Dec 11 09:17:13 compute-0 sudo[95366]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 11 09:17:13 compute-0 sudo[95366]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 11 09:17:13 compute-0 sudo[95366]: pam_unix(sudo:session): session closed for user root
Dec 11 09:17:13 compute-0 sudo[95391]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/keepalived:2.2.4 --timeout 895 _orch deploy --fsid 31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060
Dec 11 09:17:13 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-2-0-compute-0-iryjby[94782]: 11/12/2025 09:17:13 : epoch 693a8bfe : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fef54003340 fd 47 proxy header rest len failed header rlen = % (will set dead)
Dec 11 09:17:13 compute-0 sudo[95391]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 11 09:17:13 compute-0 ceph-osd[82859]: log_channel(cluster) log [DBG] : 11.0 scrub starts
Dec 11 09:17:13 compute-0 ceph-osd[82859]: log_channel(cluster) log [DBG] : 11.0 scrub ok
Dec 11 09:17:14 compute-0 ceph-mon[74426]: mon.compute-0@0(leader).osd e67 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 11 09:17:14 compute-0 ceph-mon[74426]: mon.compute-0@0(leader).osd e67 do_prune osdmap full prune enabled
Dec 11 09:17:14 compute-0 ceph-mon[74426]: log_channel(audit) log [INF] : from='mgr.14400 192.168.122.100:0/1063598679' entity='mgr.compute-0.wwpcae' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "3"}]': finished
Dec 11 09:17:14 compute-0 ceph-mon[74426]: mon.compute-0@0(leader).osd e68 e68: 3 total, 3 up, 3 in
Dec 11 09:17:14 compute-0 ceph-mon[74426]: log_channel(cluster) log [DBG] : osdmap e68: 3 total, 3 up, 3 in
Dec 11 09:17:14 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 68 pg[10.16( empty local-lis/les=0/0 n=0 ec=61/50 lis/c=61/61 les/c/f=62/62/0 sis=68) [1]/[0] r=-1 lpr=68 pi=[61,68)/1 crt=0'0 mlcod 0'0 remapped mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 11 09:17:14 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 68 pg[10.16( empty local-lis/les=0/0 n=0 ec=61/50 lis/c=61/61 les/c/f=62/62/0 sis=68) [1]/[0] r=-1 lpr=68 pi=[61,68)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec 11 09:17:14 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 68 pg[10.a( empty local-lis/les=0/0 n=0 ec=61/50 lis/c=61/61 les/c/f=62/62/0 sis=68) [1]/[0] r=-1 lpr=68 pi=[61,68)/1 crt=0'0 mlcod 0'0 remapped mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 11 09:17:14 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 68 pg[10.a( empty local-lis/les=0/0 n=0 ec=61/50 lis/c=61/61 les/c/f=62/62/0 sis=68) [1]/[0] r=-1 lpr=68 pi=[61,68)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec 11 09:17:14 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 68 pg[10.12( empty local-lis/les=0/0 n=0 ec=61/50 lis/c=61/61 les/c/f=62/62/0 sis=68) [1]/[0] r=-1 lpr=68 pi=[61,68)/1 crt=0'0 mlcod 0'0 remapped mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 11 09:17:14 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 68 pg[10.12( empty local-lis/les=0/0 n=0 ec=61/50 lis/c=61/61 les/c/f=62/62/0 sis=68) [1]/[0] r=-1 lpr=68 pi=[61,68)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec 11 09:17:14 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 68 pg[10.e( empty local-lis/les=0/0 n=0 ec=61/50 lis/c=61/61 les/c/f=62/62/0 sis=68) [1]/[0] r=-1 lpr=68 pi=[61,68)/1 crt=0'0 mlcod 0'0 remapped mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 11 09:17:14 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 68 pg[10.e( empty local-lis/les=0/0 n=0 ec=61/50 lis/c=61/61 les/c/f=62/62/0 sis=68) [1]/[0] r=-1 lpr=68 pi=[61,68)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec 11 09:17:14 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 68 pg[10.2( empty local-lis/les=0/0 n=0 ec=61/50 lis/c=61/61 les/c/f=62/62/0 sis=68) [1]/[0] r=-1 lpr=68 pi=[61,68)/1 crt=0'0 mlcod 0'0 remapped mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 11 09:17:14 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 68 pg[10.2( empty local-lis/les=0/0 n=0 ec=61/50 lis/c=61/61 les/c/f=62/62/0 sis=68) [1]/[0] r=-1 lpr=68 pi=[61,68)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec 11 09:17:14 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 68 pg[10.6( empty local-lis/les=0/0 n=0 ec=61/50 lis/c=61/61 les/c/f=62/62/0 sis=68) [1]/[0] r=-1 lpr=68 pi=[61,68)/1 crt=0'0 mlcod 0'0 remapped mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 11 09:17:14 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 68 pg[10.6( empty local-lis/les=0/0 n=0 ec=61/50 lis/c=61/61 les/c/f=62/62/0 sis=68) [1]/[0] r=-1 lpr=68 pi=[61,68)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec 11 09:17:14 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 68 pg[10.1a( empty local-lis/les=0/0 n=0 ec=61/50 lis/c=61/61 les/c/f=62/62/0 sis=68) [1]/[0] r=-1 lpr=68 pi=[61,68)/1 crt=0'0 mlcod 0'0 remapped mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 11 09:17:14 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 68 pg[10.1a( empty local-lis/les=0/0 n=0 ec=61/50 lis/c=61/61 les/c/f=62/62/0 sis=68) [1]/[0] r=-1 lpr=68 pi=[61,68)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec 11 09:17:14 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 68 pg[10.1e( empty local-lis/les=0/0 n=0 ec=61/50 lis/c=61/61 les/c/f=62/62/0 sis=68) [1]/[0] r=-1 lpr=68 pi=[61,68)/1 crt=0'0 mlcod 0'0 remapped mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 11 09:17:14 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 68 pg[10.1e( empty local-lis/les=0/0 n=0 ec=61/50 lis/c=61/61 les/c/f=62/62/0 sis=68) [1]/[0] r=-1 lpr=68 pi=[61,68)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec 11 09:17:14 compute-0 ceph-mon[74426]: 10.14 scrub starts
Dec 11 09:17:14 compute-0 ceph-mon[74426]: 10.14 scrub ok
Dec 11 09:17:14 compute-0 ceph-mon[74426]: pgmap v62: 353 pgs: 353 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail
Dec 11 09:17:14 compute-0 ceph-mon[74426]: osdmap e66: 3 total, 3 up, 3 in
Dec 11 09:17:14 compute-0 ceph-mon[74426]: 11.15 scrub starts
Dec 11 09:17:14 compute-0 ceph-mon[74426]: 11.15 scrub ok
Dec 11 09:17:14 compute-0 ceph-mon[74426]: 9.2 scrub starts
Dec 11 09:17:14 compute-0 ceph-mon[74426]: 9.2 scrub ok
Dec 11 09:17:14 compute-0 ceph-mon[74426]: 8.18 scrub starts
Dec 11 09:17:14 compute-0 ceph-mon[74426]: 8.18 scrub ok
Dec 11 09:17:14 compute-0 ceph-mon[74426]: pgmap v64: 353 pgs: 353 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail
Dec 11 09:17:14 compute-0 ceph-mon[74426]: from='mgr.14400 192.168.122.100:0/1063598679' entity='mgr.compute-0.wwpcae' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "3"}]: dispatch
Dec 11 09:17:14 compute-0 ceph-mon[74426]: from='mgr.14400 192.168.122.100:0/1063598679' entity='mgr.compute-0.wwpcae' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "3"}]': finished
Dec 11 09:17:14 compute-0 ceph-mon[74426]: osdmap e67: 3 total, 3 up, 3 in
Dec 11 09:17:14 compute-0 ceph-mon[74426]: from='mgr.14400 192.168.122.100:0/1063598679' entity='mgr.compute-0.wwpcae' 
Dec 11 09:17:14 compute-0 ceph-mon[74426]: from='mgr.14400 192.168.122.100:0/1063598679' entity='mgr.compute-0.wwpcae' 
Dec 11 09:17:14 compute-0 ceph-mon[74426]: from='mgr.14400 192.168.122.100:0/1063598679' entity='mgr.compute-0.wwpcae' 
Dec 11 09:17:14 compute-0 ceph-mon[74426]: 192.168.122.2 is in 192.168.122.0/24 on compute-0 interface br-ex
Dec 11 09:17:14 compute-0 ceph-mon[74426]: 192.168.122.2 is in 192.168.122.0/24 on compute-1 interface br-ex
Dec 11 09:17:14 compute-0 ceph-mon[74426]: 192.168.122.2 is in 192.168.122.0/24 on compute-2 interface br-ex
Dec 11 09:17:14 compute-0 ceph-mon[74426]: Deploying daemon keepalived.nfs.cephfs.compute-0.ewssxv on compute-0
Dec 11 09:17:14 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-2-0-compute-0-iryjby[94782]: 11/12/2025 09:17:14 : epoch 693a8bfe : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fef60002b70 fd 47 proxy header rest len failed header rlen = % (will set dead)
Dec 11 09:17:14 compute-0 ceph-osd[82859]: log_channel(cluster) log [DBG] : 11.c scrub starts
Dec 11 09:17:14 compute-0 ceph-osd[82859]: log_channel(cluster) log [DBG] : 11.c scrub ok
Dec 11 09:17:14 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-haproxy-nfs-cephfs-compute-0-qtoxfz[95274]: [WARNING] 344/091714 (4) : Server backend/nfs.cephfs.1 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 2 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Dec 11 09:17:14 compute-0 ceph-mgr[74715]: log_channel(cluster) log [DBG] : pgmap v67: 353 pgs: 16 unknown, 46 peering, 291 active+clean; 455 KiB data, 121 MiB used, 60 GiB / 60 GiB avail; 219 B/s, 1 objects/s recovering
Dec 11 09:17:14 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-2-0-compute-0-iryjby[94782]: 11/12/2025 09:17:14 : epoch 693a8bfe : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fef68002910 fd 47 proxy header rest len failed header rlen = % (will set dead)
Dec 11 09:17:15 compute-0 ceph-mon[74426]: mon.compute-0@0(leader).osd e68 do_prune osdmap full prune enabled
Dec 11 09:17:15 compute-0 ceph-mon[74426]: mon.compute-0@0(leader).osd e69 e69: 3 total, 3 up, 3 in
Dec 11 09:17:15 compute-0 ceph-mon[74426]: log_channel(cluster) log [DBG] : osdmap e69: 3 total, 3 up, 3 in
Dec 11 09:17:15 compute-0 ceph-mon[74426]: 11.0 scrub starts
Dec 11 09:17:15 compute-0 ceph-mon[74426]: 11.0 scrub ok
Dec 11 09:17:15 compute-0 ceph-mon[74426]: from='mgr.14400 192.168.122.100:0/1063598679' entity='mgr.compute-0.wwpcae' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "3"}]': finished
Dec 11 09:17:15 compute-0 ceph-mon[74426]: osdmap e68: 3 total, 3 up, 3 in
Dec 11 09:17:15 compute-0 ceph-mon[74426]: osdmap e69: 3 total, 3 up, 3 in
Dec 11 09:17:15 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-2-0-compute-0-iryjby[94782]: 11/12/2025 09:17:15 : epoch 693a8bfe : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fef58002cb0 fd 47 proxy header rest len failed header rlen = % (will set dead)
Dec 11 09:17:15 compute-0 ceph-osd[82859]: log_channel(cluster) log [DBG] : 11.b scrub starts
Dec 11 09:17:15 compute-0 ceph-osd[82859]: log_channel(cluster) log [DBG] : 11.b scrub ok
Dec 11 09:17:16 compute-0 ceph-mon[74426]: mon.compute-0@0(leader).osd e69 do_prune osdmap full prune enabled
Dec 11 09:17:16 compute-0 ceph-mon[74426]: mon.compute-0@0(leader).osd e70 e70: 3 total, 3 up, 3 in
Dec 11 09:17:16 compute-0 ceph-mon[74426]: log_channel(cluster) log [DBG] : osdmap e70: 3 total, 3 up, 3 in
Dec 11 09:17:16 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 70 pg[10.1e( v 56'1088 (0'0,56'1088] local-lis/les=0/0 n=5 ec=61/50 lis/c=68/61 les/c/f=69/62/0 sis=70) [1] r=0 lpr=70 pi=[61,70)/1 luod=0'0 crt=56'1088 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 1 -> 1, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Dec 11 09:17:16 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 70 pg[10.1e( v 56'1088 (0'0,56'1088] local-lis/les=0/0 n=5 ec=61/50 lis/c=68/61 les/c/f=69/62/0 sis=70) [1] r=0 lpr=70 pi=[61,70)/1 crt=56'1088 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 11 09:17:16 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 70 pg[10.1a( v 56'1088 (0'0,56'1088] local-lis/les=0/0 n=4 ec=61/50 lis/c=68/61 les/c/f=69/62/0 sis=70) [1] r=0 lpr=70 pi=[61,70)/1 luod=0'0 crt=56'1088 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 1 -> 1, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Dec 11 09:17:16 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 70 pg[10.1a( v 56'1088 (0'0,56'1088] local-lis/les=0/0 n=4 ec=61/50 lis/c=68/61 les/c/f=69/62/0 sis=70) [1] r=0 lpr=70 pi=[61,70)/1 crt=56'1088 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 11 09:17:16 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 70 pg[10.2( v 56'1088 (0'0,56'1088] local-lis/les=0/0 n=6 ec=61/50 lis/c=68/61 les/c/f=69/62/0 sis=70) [1] r=0 lpr=70 pi=[61,70)/1 luod=0'0 crt=56'1088 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 1 -> 1, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Dec 11 09:17:16 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 70 pg[10.e( v 56'1088 (0'0,56'1088] local-lis/les=0/0 n=6 ec=61/50 lis/c=68/61 les/c/f=69/62/0 sis=70) [1] r=0 lpr=70 pi=[61,70)/1 luod=0'0 crt=56'1088 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 1 -> 1, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Dec 11 09:17:16 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 70 pg[10.2( v 56'1088 (0'0,56'1088] local-lis/les=0/0 n=6 ec=61/50 lis/c=68/61 les/c/f=69/62/0 sis=70) [1] r=0 lpr=70 pi=[61,70)/1 crt=56'1088 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 11 09:17:16 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 70 pg[10.e( v 56'1088 (0'0,56'1088] local-lis/les=0/0 n=6 ec=61/50 lis/c=68/61 les/c/f=69/62/0 sis=70) [1] r=0 lpr=70 pi=[61,70)/1 crt=56'1088 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 11 09:17:16 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 70 pg[10.a( v 56'1088 (0'0,56'1088] local-lis/les=0/0 n=6 ec=61/50 lis/c=68/61 les/c/f=69/62/0 sis=70) [1] r=0 lpr=70 pi=[61,70)/1 luod=0'0 crt=56'1088 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 1 -> 1, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Dec 11 09:17:16 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 70 pg[10.a( v 56'1088 (0'0,56'1088] local-lis/les=0/0 n=6 ec=61/50 lis/c=68/61 les/c/f=69/62/0 sis=70) [1] r=0 lpr=70 pi=[61,70)/1 crt=56'1088 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 11 09:17:16 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 70 pg[10.12( v 56'1088 (0'0,56'1088] local-lis/les=0/0 n=6 ec=61/50 lis/c=68/61 les/c/f=69/62/0 sis=70) [1] r=0 lpr=70 pi=[61,70)/1 luod=0'0 crt=56'1088 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 1 -> 1, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Dec 11 09:17:16 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 70 pg[10.12( v 56'1088 (0'0,56'1088] local-lis/les=0/0 n=6 ec=61/50 lis/c=68/61 les/c/f=69/62/0 sis=70) [1] r=0 lpr=70 pi=[61,70)/1 crt=56'1088 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 11 09:17:16 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 70 pg[10.16( v 56'1088 (0'0,56'1088] local-lis/les=0/0 n=4 ec=61/50 lis/c=68/61 les/c/f=69/62/0 sis=70) [1] r=0 lpr=70 pi=[61,70)/1 luod=0'0 crt=56'1088 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 1 -> 1, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Dec 11 09:17:16 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 70 pg[10.16( v 56'1088 (0'0,56'1088] local-lis/les=0/0 n=4 ec=61/50 lis/c=68/61 les/c/f=69/62/0 sis=70) [1] r=0 lpr=70 pi=[61,70)/1 crt=56'1088 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 11 09:17:16 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 70 pg[10.6( v 56'1088 (0'0,56'1088] local-lis/les=0/0 n=6 ec=61/50 lis/c=68/61 les/c/f=69/62/0 sis=70) [1] r=0 lpr=70 pi=[61,70)/1 luod=0'0 crt=56'1088 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 1 -> 1, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Dec 11 09:17:16 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 70 pg[10.6( v 56'1088 (0'0,56'1088] local-lis/les=0/0 n=6 ec=61/50 lis/c=68/61 les/c/f=69/62/0 sis=70) [1] r=0 lpr=70 pi=[61,70)/1 crt=56'1088 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 11 09:17:16 compute-0 ceph-mon[74426]: 11.c scrub starts
Dec 11 09:17:16 compute-0 ceph-mon[74426]: 11.c scrub ok
Dec 11 09:17:16 compute-0 ceph-mon[74426]: 10.0 scrub starts
Dec 11 09:17:16 compute-0 ceph-mon[74426]: 10.0 scrub ok
Dec 11 09:17:16 compute-0 ceph-mon[74426]: pgmap v67: 353 pgs: 16 unknown, 46 peering, 291 active+clean; 455 KiB data, 121 MiB used, 60 GiB / 60 GiB avail; 219 B/s, 1 objects/s recovering
Dec 11 09:17:16 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-2-0-compute-0-iryjby[94782]: 11/12/2025 09:17:16 : epoch 693a8bfe : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fef54003340 fd 47 proxy header rest len failed header rlen = % (will set dead)
Dec 11 09:17:16 compute-0 ceph-osd[82859]: log_channel(cluster) log [DBG] : 11.9 scrub starts
Dec 11 09:17:16 compute-0 ceph-osd[82859]: log_channel(cluster) log [DBG] : 11.9 scrub ok
Dec 11 09:17:16 compute-0 ceph-mgr[74715]: log_channel(cluster) log [DBG] : pgmap v70: 353 pgs: 16 unknown, 46 peering, 291 active+clean; 455 KiB data, 121 MiB used, 60 GiB / 60 GiB avail; 282 B/s, 2 objects/s recovering
Dec 11 09:17:16 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-2-0-compute-0-iryjby[94782]: 11/12/2025 09:17:16 : epoch 693a8bfe : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fef60002b70 fd 47 proxy header rest len failed header rlen = % (will set dead)
Dec 11 09:17:17 compute-0 ceph-mon[74426]: mon.compute-0@0(leader).osd e70 do_prune osdmap full prune enabled
Dec 11 09:17:17 compute-0 ceph-mon[74426]: mon.compute-0@0(leader).osd e71 e71: 3 total, 3 up, 3 in
Dec 11 09:17:17 compute-0 ceph-mon[74426]: log_channel(cluster) log [DBG] : osdmap e71: 3 total, 3 up, 3 in
Dec 11 09:17:17 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 71 pg[10.12( v 56'1088 (0'0,56'1088] local-lis/les=70/71 n=6 ec=61/50 lis/c=68/61 les/c/f=69/62/0 sis=70) [1] r=0 lpr=70 pi=[61,70)/1 crt=56'1088 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 11 09:17:17 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 71 pg[10.a( v 56'1088 (0'0,56'1088] local-lis/les=70/71 n=6 ec=61/50 lis/c=68/61 les/c/f=69/62/0 sis=70) [1] r=0 lpr=70 pi=[61,70)/1 crt=56'1088 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 11 09:17:17 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 71 pg[10.16( v 56'1088 (0'0,56'1088] local-lis/les=70/71 n=4 ec=61/50 lis/c=68/61 les/c/f=69/62/0 sis=70) [1] r=0 lpr=70 pi=[61,70)/1 crt=56'1088 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 11 09:17:17 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 71 pg[10.e( v 56'1088 (0'0,56'1088] local-lis/les=70/71 n=6 ec=61/50 lis/c=68/61 les/c/f=69/62/0 sis=70) [1] r=0 lpr=70 pi=[61,70)/1 crt=56'1088 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 11 09:17:17 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 71 pg[10.2( v 56'1088 (0'0,56'1088] local-lis/les=70/71 n=6 ec=61/50 lis/c=68/61 les/c/f=69/62/0 sis=70) [1] r=0 lpr=70 pi=[61,70)/1 crt=56'1088 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 11 09:17:17 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 71 pg[10.6( v 56'1088 (0'0,56'1088] local-lis/les=70/71 n=6 ec=61/50 lis/c=68/61 les/c/f=69/62/0 sis=70) [1] r=0 lpr=70 pi=[61,70)/1 crt=56'1088 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 11 09:17:17 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 71 pg[10.1a( v 56'1088 (0'0,56'1088] local-lis/les=70/71 n=4 ec=61/50 lis/c=68/61 les/c/f=69/62/0 sis=70) [1] r=0 lpr=70 pi=[61,70)/1 crt=56'1088 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 11 09:17:17 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 71 pg[10.1e( v 56'1088 (0'0,56'1088] local-lis/les=70/71 n=5 ec=61/50 lis/c=68/61 les/c/f=69/62/0 sis=70) [1] r=0 lpr=70 pi=[61,70)/1 crt=56'1088 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 11 09:17:17 compute-0 ceph-mon[74426]: 10.13 scrub starts
Dec 11 09:17:17 compute-0 ceph-mon[74426]: 10.13 scrub ok
Dec 11 09:17:17 compute-0 ceph-mon[74426]: 11.b scrub starts
Dec 11 09:17:17 compute-0 ceph-mon[74426]: 11.b scrub ok
Dec 11 09:17:17 compute-0 ceph-mon[74426]: osdmap e70: 3 total, 3 up, 3 in
Dec 11 09:17:17 compute-0 ceph-mon[74426]: 10.c scrub starts
Dec 11 09:17:17 compute-0 ceph-mon[74426]: 10.c scrub ok
Dec 11 09:17:17 compute-0 ceph-mon[74426]: 11.9 scrub starts
Dec 11 09:17:17 compute-0 ceph-mon[74426]: 11.9 scrub ok
Dec 11 09:17:17 compute-0 ceph-mon[74426]: osdmap e71: 3 total, 3 up, 3 in
Dec 11 09:17:17 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-2-0-compute-0-iryjby[94782]: 11/12/2025 09:17:17 : epoch 693a8bfe : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fef68002910 fd 47 proxy header rest len failed header rlen = % (will set dead)
Dec 11 09:17:17 compute-0 ceph-mgr[74715]: [progress INFO root] Writing back 24 completed events
Dec 11 09:17:17 compute-0 ceph-mon[74426]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0)
Dec 11 09:17:17 compute-0 sudo[95565]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-czurpqfotpwzlxjrtughefsunzcvzicp ; /usr/bin/python3'
Dec 11 09:17:17 compute-0 sudo[95565]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 11 09:17:17 compute-0 ceph-osd[82859]: log_channel(cluster) log [DBG] : 11.d scrub starts
Dec 11 09:17:17 compute-0 ceph-mon[74426]: log_channel(audit) log [INF] : from='mgr.14400 192.168.122.100:0/1063598679' entity='mgr.compute-0.wwpcae' 
Dec 11 09:17:17 compute-0 ceph-mgr[74715]: [progress WARNING root] Starting Global Recovery Event,62 pgs not in active + clean state
Dec 11 09:17:17 compute-0 ceph-osd[82859]: log_channel(cluster) log [DBG] : 11.d scrub ok
Dec 11 09:17:17 compute-0 podman[95456]: 2025-12-11 09:17:17.759807498 +0000 UTC m=+4.019606672 container create 2f02b2cdbba284fd4f35a715030d973ed1062187fa312902216bb474b48427fd (image=quay.io/ceph/keepalived:2.2.4, name=youthful_bose, vendor=Red Hat, Inc., version=2.2.4, build-date=2023-02-22T09:23:20, description=keepalived for Ceph, io.openshift.tags=Ceph keepalived, name=keepalived, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, vcs-type=git, io.buildah.version=1.28.2, architecture=x86_64, io.openshift.expose-services=, summary=Provides keepalived on RHEL 9 for Ceph., com.redhat.component=keepalived-container, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Keepalived on RHEL 9, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, release=1793)
Dec 11 09:17:17 compute-0 podman[95456]: 2025-12-11 09:17:17.745048149 +0000 UTC m=+4.004847343 image pull 4a3a1ff181d97c6dcfa9138ad76eb99fa2c1e840298461d5a7a56133bc05b9a2 quay.io/ceph/keepalived:2.2.4
Dec 11 09:17:17 compute-0 systemd[1]: Started libpod-conmon-2f02b2cdbba284fd4f35a715030d973ed1062187fa312902216bb474b48427fd.scope.
Dec 11 09:17:17 compute-0 systemd[1]: Started libcrun container.
Dec 11 09:17:17 compute-0 podman[95456]: 2025-12-11 09:17:17.841918717 +0000 UTC m=+4.101717921 container init 2f02b2cdbba284fd4f35a715030d973ed1062187fa312902216bb474b48427fd (image=quay.io/ceph/keepalived:2.2.4, name=youthful_bose, vcs-type=git, io.openshift.tags=Ceph keepalived, vendor=Red Hat, Inc., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, version=2.2.4, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, io.k8s.display-name=Keepalived on RHEL 9, release=1793, build-date=2023-02-22T09:23:20, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, architecture=x86_64, name=keepalived, com.redhat.component=keepalived-container, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=keepalived for Ceph, io.buildah.version=1.28.2, io.openshift.expose-services=, summary=Provides keepalived on RHEL 9 for Ceph.)
Dec 11 09:17:17 compute-0 podman[95456]: 2025-12-11 09:17:17.854368583 +0000 UTC m=+4.114167757 container start 2f02b2cdbba284fd4f35a715030d973ed1062187fa312902216bb474b48427fd (image=quay.io/ceph/keepalived:2.2.4, name=youthful_bose, vendor=Red Hat, Inc., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=Ceph keepalived, summary=Provides keepalived on RHEL 9 for Ceph., version=2.2.4, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, io.buildah.version=1.28.2, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-type=git, io.k8s.display-name=Keepalived on RHEL 9, architecture=x86_64, name=keepalived, com.redhat.component=keepalived-container, distribution-scope=public, build-date=2023-02-22T09:23:20, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, description=keepalived for Ceph, release=1793)
Dec 11 09:17:17 compute-0 python3[95576]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint radosgw-admin quay.io/ceph/ceph:v19 --fsid 31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch ps --daemon-type rgw --format json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 11 09:17:17 compute-0 podman[95456]: 2025-12-11 09:17:17.858035517 +0000 UTC m=+4.117834691 container attach 2f02b2cdbba284fd4f35a715030d973ed1062187fa312902216bb474b48427fd (image=quay.io/ceph/keepalived:2.2.4, name=youthful_bose, vendor=Red Hat, Inc., architecture=x86_64, com.redhat.component=keepalived-container, io.openshift.tags=Ceph keepalived, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., version=2.2.4, build-date=2023-02-22T09:23:20, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, release=1793, io.buildah.version=1.28.2, io.openshift.expose-services=, summary=Provides keepalived on RHEL 9 for Ceph., distribution-scope=public, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, io.k8s.display-name=Keepalived on RHEL 9, name=keepalived, description=keepalived for Ceph)
Dec 11 09:17:17 compute-0 youthful_bose[95578]: 0 0
Dec 11 09:17:17 compute-0 systemd[1]: libpod-2f02b2cdbba284fd4f35a715030d973ed1062187fa312902216bb474b48427fd.scope: Deactivated successfully.
Dec 11 09:17:17 compute-0 podman[95456]: 2025-12-11 09:17:17.860997419 +0000 UTC m=+4.120796603 container died 2f02b2cdbba284fd4f35a715030d973ed1062187fa312902216bb474b48427fd (image=quay.io/ceph/keepalived:2.2.4, name=youthful_bose, distribution-scope=public, vcs-type=git, description=keepalived for Ceph, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=Ceph keepalived, summary=Provides keepalived on RHEL 9 for Ceph., build-date=2023-02-22T09:23:20, io.k8s.display-name=Keepalived on RHEL 9, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, release=1793, io.buildah.version=1.28.2, vendor=Red Hat, Inc., architecture=x86_64, name=keepalived, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, io.openshift.expose-services=, com.redhat.component=keepalived-container, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., version=2.2.4)
Dec 11 09:17:17 compute-0 systemd[1]: var-lib-containers-storage-overlay-3a179758d3d42cea441c28ff45cf88f970e03f8fe0650a13457806614a7a1d65-merged.mount: Deactivated successfully.
Dec 11 09:17:17 compute-0 podman[95456]: 2025-12-11 09:17:17.901592629 +0000 UTC m=+4.161391803 container remove 2f02b2cdbba284fd4f35a715030d973ed1062187fa312902216bb474b48427fd (image=quay.io/ceph/keepalived:2.2.4, name=youthful_bose, name=keepalived, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, vcs-type=git, io.openshift.expose-services=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=Ceph keepalived, io.buildah.version=1.28.2, summary=Provides keepalived on RHEL 9 for Ceph., description=keepalived for Ceph, build-date=2023-02-22T09:23:20, io.k8s.display-name=Keepalived on RHEL 9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, release=1793, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, distribution-scope=public, version=2.2.4, com.redhat.component=keepalived-container, vendor=Red Hat, Inc.)
Dec 11 09:17:17 compute-0 systemd[1]: libpod-conmon-2f02b2cdbba284fd4f35a715030d973ed1062187fa312902216bb474b48427fd.scope: Deactivated successfully.
Dec 11 09:17:17 compute-0 podman[95583]: 2025-12-11 09:17:17.930000641 +0000 UTC m=+0.057889558 container create e77c7a326c9cfba15418387cdbb5304f5e83bbf8d9c3a3817b6fbfb48593661f (image=quay.io/ceph/ceph:v19, name=elegant_mcclintock, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 11 09:17:17 compute-0 systemd[1]: Started libpod-conmon-e77c7a326c9cfba15418387cdbb5304f5e83bbf8d9c3a3817b6fbfb48593661f.scope.
Dec 11 09:17:17 compute-0 systemd[1]: Reloading.
Dec 11 09:17:18 compute-0 podman[95583]: 2025-12-11 09:17:17.912286202 +0000 UTC m=+0.040175139 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Dec 11 09:17:18 compute-0 systemd-rc-local-generator[95644]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 11 09:17:18 compute-0 systemd-sysv-generator[95648]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 11 09:17:18 compute-0 ceph-mon[74426]: 10.1 scrub starts
Dec 11 09:17:18 compute-0 ceph-mon[74426]: 10.1 scrub ok
Dec 11 09:17:18 compute-0 ceph-mon[74426]: 10.8 scrub starts
Dec 11 09:17:18 compute-0 ceph-mon[74426]: 10.8 scrub ok
Dec 11 09:17:18 compute-0 ceph-mon[74426]: pgmap v70: 353 pgs: 16 unknown, 46 peering, 291 active+clean; 455 KiB data, 121 MiB used, 60 GiB / 60 GiB avail; 282 B/s, 2 objects/s recovering
Dec 11 09:17:18 compute-0 ceph-mon[74426]: 11.d scrub starts
Dec 11 09:17:18 compute-0 ceph-mon[74426]: from='mgr.14400 192.168.122.100:0/1063598679' entity='mgr.compute-0.wwpcae' 
Dec 11 09:17:18 compute-0 ceph-mon[74426]: 11.d scrub ok
Dec 11 09:17:18 compute-0 systemd[1]: Started libcrun container.
Dec 11 09:17:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/aa70c31b4567c6303aa641c0194bea0c4b43029f6a13863d4d18f1f9346c6e45/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 11 09:17:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/aa70c31b4567c6303aa641c0194bea0c4b43029f6a13863d4d18f1f9346c6e45/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec 11 09:17:18 compute-0 podman[95583]: 2025-12-11 09:17:18.281365248 +0000 UTC m=+0.409254175 container init e77c7a326c9cfba15418387cdbb5304f5e83bbf8d9c3a3817b6fbfb48593661f (image=quay.io/ceph/ceph:v19, name=elegant_mcclintock, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Dec 11 09:17:18 compute-0 podman[95583]: 2025-12-11 09:17:18.292190835 +0000 UTC m=+0.420079752 container start e77c7a326c9cfba15418387cdbb5304f5e83bbf8d9c3a3817b6fbfb48593661f (image=quay.io/ceph/ceph:v19, name=elegant_mcclintock, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True)
Dec 11 09:17:18 compute-0 podman[95583]: 2025-12-11 09:17:18.295972192 +0000 UTC m=+0.423861139 container attach e77c7a326c9cfba15418387cdbb5304f5e83bbf8d9c3a3817b6fbfb48593661f (image=quay.io/ceph/ceph:v19, name=elegant_mcclintock, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, ceph=True, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Dec 11 09:17:18 compute-0 systemd[1]: Reloading.
Dec 11 09:17:18 compute-0 elegant_mcclintock[95616]: ERROR: invalid flag --daemon-type
Dec 11 09:17:18 compute-0 systemd-rc-local-generator[95708]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 11 09:17:18 compute-0 systemd-sysv-generator[95712]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 11 09:17:18 compute-0 podman[95678]: 2025-12-11 09:17:18.416961228 +0000 UTC m=+0.037954639 container died e77c7a326c9cfba15418387cdbb5304f5e83bbf8d9c3a3817b6fbfb48593661f (image=quay.io/ceph/ceph:v19, name=elegant_mcclintock, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec 11 09:17:18 compute-0 systemd[1]: libpod-e77c7a326c9cfba15418387cdbb5304f5e83bbf8d9c3a3817b6fbfb48593661f.scope: Deactivated successfully.
Dec 11 09:17:18 compute-0 systemd[1]: var-lib-containers-storage-overlay-aa70c31b4567c6303aa641c0194bea0c4b43029f6a13863d4d18f1f9346c6e45-merged.mount: Deactivated successfully.
Dec 11 09:17:18 compute-0 systemd[1]: Starting Ceph keepalived.nfs.cephfs.compute-0.ewssxv for 31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060...
Dec 11 09:17:18 compute-0 podman[95678]: 2025-12-11 09:17:18.637811623 +0000 UTC m=+0.258805004 container remove e77c7a326c9cfba15418387cdbb5304f5e83bbf8d9c3a3817b6fbfb48593661f (image=quay.io/ceph/ceph:v19, name=elegant_mcclintock, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Dec 11 09:17:18 compute-0 systemd[1]: libpod-conmon-e77c7a326c9cfba15418387cdbb5304f5e83bbf8d9c3a3817b6fbfb48593661f.scope: Deactivated successfully.
Dec 11 09:17:18 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-2-0-compute-0-iryjby[94782]: 11/12/2025 09:17:18 : epoch 693a8bfe : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fef58002cb0 fd 47 proxy header rest len failed header rlen = % (will set dead)
Dec 11 09:17:18 compute-0 sudo[95565]: pam_unix(sudo:session): session closed for user root
Dec 11 09:17:18 compute-0 ceph-osd[82859]: log_channel(cluster) log [DBG] : 8.e scrub starts
Dec 11 09:17:18 compute-0 ceph-osd[82859]: log_channel(cluster) log [DBG] : 8.e scrub ok
Dec 11 09:17:18 compute-0 podman[95770]: 2025-12-11 09:17:18.868710682 +0000 UTC m=+0.043197052 container create 2552e27573042b86043f6eb7d85be1b045b99c64a99b48fb64027534fb48d8b4 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-keepalived-nfs-cephfs-compute-0-ewssxv, description=keepalived for Ceph, release=1793, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, vendor=Red Hat, Inc., io.openshift.expose-services=, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Guillaume Abrioux <gabrioux@redhat.com>, com.redhat.component=keepalived-container, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, io.openshift.tags=Ceph keepalived, name=keepalived, io.buildah.version=1.28.2, summary=Provides keepalived on RHEL 9 for Ceph., version=2.2.4, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, distribution-scope=public, io.k8s.display-name=Keepalived on RHEL 9, build-date=2023-02-22T09:23:20)
Dec 11 09:17:18 compute-0 ceph-mgr[74715]: log_channel(cluster) log [DBG] : pgmap v72: 353 pgs: 353 active+clean; 457 KiB data, 125 MiB used, 60 GiB / 60 GiB avail; 58 KiB/s rd, 1.2 KiB/s wr, 105 op/s; 622 B/s, 23 objects/s recovering
Dec 11 09:17:18 compute-0 ceph-mon[74426]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "4"} v 0)
Dec 11 09:17:18 compute-0 ceph-mon[74426]: log_channel(audit) log [INF] : from='mgr.14400 192.168.122.100:0/1063598679' entity='mgr.compute-0.wwpcae' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "4"}]: dispatch
Dec 11 09:17:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bc4bce1d9bc77308957f2ee6302155966316a3e88dd25cd3e2bac78fa73a06de/merged/etc/keepalived/keepalived.conf supports timestamps until 2038 (0x7fffffff)
Dec 11 09:17:18 compute-0 podman[95770]: 2025-12-11 09:17:18.926724462 +0000 UTC m=+0.101210842 container init 2552e27573042b86043f6eb7d85be1b045b99c64a99b48fb64027534fb48d8b4 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-keepalived-nfs-cephfs-compute-0-ewssxv, description=keepalived for Ceph, io.openshift.expose-services=, name=keepalived, summary=Provides keepalived on RHEL 9 for Ceph., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1793, distribution-scope=public, vcs-type=git, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, io.openshift.tags=Ceph keepalived, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, com.redhat.component=keepalived-container, build-date=2023-02-22T09:23:20, io.buildah.version=1.28.2, vendor=Red Hat, Inc., version=2.2.4, io.k8s.display-name=Keepalived on RHEL 9, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., architecture=x86_64)
Dec 11 09:17:18 compute-0 podman[95770]: 2025-12-11 09:17:18.933053749 +0000 UTC m=+0.107540109 container start 2552e27573042b86043f6eb7d85be1b045b99c64a99b48fb64027534fb48d8b4 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-keepalived-nfs-cephfs-compute-0-ewssxv, io.openshift.tags=Ceph keepalived, vendor=Red Hat, Inc., summary=Provides keepalived on RHEL 9 for Ceph., version=2.2.4, release=1793, io.k8s.display-name=Keepalived on RHEL 9, build-date=2023-02-22T09:23:20, io.buildah.version=1.28.2, description=keepalived for Ceph, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, io.openshift.expose-services=, name=keepalived, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, com.redhat.component=keepalived-container, vcs-type=git, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793)
Dec 11 09:17:18 compute-0 bash[95770]: 2552e27573042b86043f6eb7d85be1b045b99c64a99b48fb64027534fb48d8b4
Dec 11 09:17:18 compute-0 podman[95770]: 2025-12-11 09:17:18.849902828 +0000 UTC m=+0.024389208 image pull 4a3a1ff181d97c6dcfa9138ad76eb99fa2c1e840298461d5a7a56133bc05b9a2 quay.io/ceph/keepalived:2.2.4
Dec 11 09:17:18 compute-0 systemd[1]: Started Ceph keepalived.nfs.cephfs.compute-0.ewssxv for 31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060.
Dec 11 09:17:18 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-keepalived-nfs-cephfs-compute-0-ewssxv[95785]: Thu Dec 11 09:17:18 2025: Starting Keepalived v2.2.4 (08/21,2021)
Dec 11 09:17:18 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-keepalived-nfs-cephfs-compute-0-ewssxv[95785]: Thu Dec 11 09:17:18 2025: Running on Linux 5.14.0-648.el9.x86_64 #1 SMP PREEMPT_DYNAMIC Fri Dec 5 11:18:23 UTC 2025 (built for Linux 5.14.0)
Dec 11 09:17:18 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-keepalived-nfs-cephfs-compute-0-ewssxv[95785]: Thu Dec 11 09:17:18 2025: Command line: '/usr/sbin/keepalived' '-n' '-l' '-f' '/etc/keepalived/keepalived.conf'
Dec 11 09:17:18 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-keepalived-nfs-cephfs-compute-0-ewssxv[95785]: Thu Dec 11 09:17:18 2025: Configuration file /etc/keepalived/keepalived.conf
Dec 11 09:17:18 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-keepalived-nfs-cephfs-compute-0-ewssxv[95785]: Thu Dec 11 09:17:18 2025: NOTICE: setting config option max_auto_priority should result in better keepalived performance
Dec 11 09:17:18 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-keepalived-nfs-cephfs-compute-0-ewssxv[95785]: Thu Dec 11 09:17:18 2025: Starting VRRP child process, pid=4
Dec 11 09:17:18 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-keepalived-nfs-cephfs-compute-0-ewssxv[95785]: Thu Dec 11 09:17:18 2025: Startup complete
Dec 11 09:17:18 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-keepalived-nfs-cephfs-compute-0-ewssxv[95785]: Thu Dec 11 09:17:18 2025: (VI_0) Entering BACKUP STATE (init)
Dec 11 09:17:18 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-keepalived-nfs-cephfs-compute-0-ewssxv[95785]: Thu Dec 11 09:17:18 2025: VRRP_Script(check_backend) succeeded
Dec 11 09:17:18 compute-0 sudo[95391]: pam_unix(sudo:session): session closed for user root
Dec 11 09:17:18 compute-0 ceph-mon[74426]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec 11 09:17:19 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-2-0-compute-0-iryjby[94782]: 11/12/2025 09:17:18 : epoch 693a8bfe : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fef54004050 fd 47 proxy header rest len failed header rlen = % (will set dead)
Dec 11 09:17:19 compute-0 ceph-mon[74426]: mon.compute-0@0(leader).osd e71 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 11 09:17:19 compute-0 ceph-mon[74426]: log_channel(audit) log [INF] : from='mgr.14400 192.168.122.100:0/1063598679' entity='mgr.compute-0.wwpcae' 
Dec 11 09:17:19 compute-0 ceph-mon[74426]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec 11 09:17:19 compute-0 ceph-mon[74426]: log_channel(audit) log [INF] : from='mgr.14400 192.168.122.100:0/1063598679' entity='mgr.compute-0.wwpcae' 
Dec 11 09:17:19 compute-0 ceph-mon[74426]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.ingress.nfs.cephfs}] v 0)
Dec 11 09:17:19 compute-0 ceph-mon[74426]: log_channel(audit) log [INF] : from='mgr.14400 192.168.122.100:0/1063598679' entity='mgr.compute-0.wwpcae' 
Dec 11 09:17:19 compute-0 ceph-mgr[74715]: [cephadm INFO cephadm.services.ingress] 192.168.122.2 is in 192.168.122.0/24 on compute-1 interface br-ex
Dec 11 09:17:19 compute-0 ceph-mgr[74715]: log_channel(cephadm) log [INF] : 192.168.122.2 is in 192.168.122.0/24 on compute-1 interface br-ex
Dec 11 09:17:19 compute-0 ceph-mgr[74715]: [cephadm INFO cephadm.services.ingress] 192.168.122.2 is in 192.168.122.0/24 on compute-0 interface br-ex
Dec 11 09:17:19 compute-0 ceph-mgr[74715]: log_channel(cephadm) log [INF] : 192.168.122.2 is in 192.168.122.0/24 on compute-0 interface br-ex
Dec 11 09:17:19 compute-0 ceph-mgr[74715]: [cephadm INFO cephadm.services.ingress] 192.168.122.2 is in 192.168.122.0/24 on compute-2 interface br-ex
Dec 11 09:17:19 compute-0 ceph-mgr[74715]: log_channel(cephadm) log [INF] : 192.168.122.2 is in 192.168.122.0/24 on compute-2 interface br-ex
Dec 11 09:17:19 compute-0 ceph-mgr[74715]: [cephadm INFO cephadm.serve] Deploying daemon keepalived.nfs.cephfs.compute-1.aigyat on compute-1
Dec 11 09:17:19 compute-0 ceph-mgr[74715]: log_channel(cephadm) log [INF] : Deploying daemon keepalived.nfs.cephfs.compute-1.aigyat on compute-1
Dec 11 09:17:19 compute-0 ceph-mon[74426]: mon.compute-0@0(leader).osd e71 do_prune osdmap full prune enabled
Dec 11 09:17:19 compute-0 ceph-mon[74426]: log_channel(audit) log [INF] : from='mgr.14400 192.168.122.100:0/1063598679' entity='mgr.compute-0.wwpcae' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "4"}]': finished
Dec 11 09:17:19 compute-0 ceph-mon[74426]: mon.compute-0@0(leader).osd e72 e72: 3 total, 3 up, 3 in
Dec 11 09:17:19 compute-0 ceph-mon[74426]: log_channel(cluster) log [DBG] : osdmap e72: 3 total, 3 up, 3 in
Dec 11 09:17:19 compute-0 ceph-mon[74426]: 10.d scrub starts
Dec 11 09:17:19 compute-0 ceph-mon[74426]: 10.d scrub ok
Dec 11 09:17:19 compute-0 ceph-mon[74426]: 12.15 scrub starts
Dec 11 09:17:19 compute-0 ceph-mon[74426]: 12.15 scrub ok
Dec 11 09:17:19 compute-0 ceph-mon[74426]: 8.e scrub starts
Dec 11 09:17:19 compute-0 ceph-mon[74426]: 8.e scrub ok
Dec 11 09:17:19 compute-0 ceph-mon[74426]: from='mgr.14400 192.168.122.100:0/1063598679' entity='mgr.compute-0.wwpcae' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "4"}]: dispatch
Dec 11 09:17:19 compute-0 ceph-mon[74426]: from='mgr.14400 192.168.122.100:0/1063598679' entity='mgr.compute-0.wwpcae' 
Dec 11 09:17:19 compute-0 ceph-mon[74426]: from='mgr.14400 192.168.122.100:0/1063598679' entity='mgr.compute-0.wwpcae' 
Dec 11 09:17:19 compute-0 ceph-mon[74426]: from='mgr.14400 192.168.122.100:0/1063598679' entity='mgr.compute-0.wwpcae' 
Dec 11 09:17:19 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-2-0-compute-0-iryjby[94782]: 11/12/2025 09:17:19 : epoch 693a8bfe : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fef60002b70 fd 47 proxy header rest len failed header rlen = % (will set dead)
Dec 11 09:17:19 compute-0 ceph-osd[82859]: log_channel(cluster) log [DBG] : 9.c scrub starts
Dec 11 09:17:19 compute-0 ceph-osd[82859]: log_channel(cluster) log [DBG] : 9.c scrub ok
Dec 11 09:17:20 compute-0 ceph-mon[74426]: 12.3 scrub starts
Dec 11 09:17:20 compute-0 ceph-mon[74426]: 12.3 scrub ok
Dec 11 09:17:20 compute-0 ceph-mon[74426]: 12.f scrub starts
Dec 11 09:17:20 compute-0 ceph-mon[74426]: 12.f scrub ok
Dec 11 09:17:20 compute-0 ceph-mon[74426]: pgmap v72: 353 pgs: 353 active+clean; 457 KiB data, 125 MiB used, 60 GiB / 60 GiB avail; 58 KiB/s rd, 1.2 KiB/s wr, 105 op/s; 622 B/s, 23 objects/s recovering
Dec 11 09:17:20 compute-0 ceph-mon[74426]: 192.168.122.2 is in 192.168.122.0/24 on compute-1 interface br-ex
Dec 11 09:17:20 compute-0 ceph-mon[74426]: 192.168.122.2 is in 192.168.122.0/24 on compute-0 interface br-ex
Dec 11 09:17:20 compute-0 ceph-mon[74426]: 192.168.122.2 is in 192.168.122.0/24 on compute-2 interface br-ex
Dec 11 09:17:20 compute-0 ceph-mon[74426]: Deploying daemon keepalived.nfs.cephfs.compute-1.aigyat on compute-1
Dec 11 09:17:20 compute-0 ceph-mon[74426]: from='mgr.14400 192.168.122.100:0/1063598679' entity='mgr.compute-0.wwpcae' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "4"}]': finished
Dec 11 09:17:20 compute-0 ceph-mon[74426]: osdmap e72: 3 total, 3 up, 3 in
Dec 11 09:17:20 compute-0 ceph-mon[74426]: 9.c scrub starts
Dec 11 09:17:20 compute-0 ceph-mon[74426]: 12.d scrub starts
Dec 11 09:17:20 compute-0 ceph-mon[74426]: 9.c scrub ok
Dec 11 09:17:20 compute-0 ceph-mon[74426]: 12.d scrub ok
Dec 11 09:17:20 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-2-0-compute-0-iryjby[94782]: 11/12/2025 09:17:20 : epoch 693a8bfe : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fef68002910 fd 47 proxy header rest len failed header rlen = % (will set dead)
Dec 11 09:17:20 compute-0 ceph-osd[82859]: log_channel(cluster) log [DBG] : 11.2 scrub starts
Dec 11 09:17:20 compute-0 ceph-osd[82859]: log_channel(cluster) log [DBG] : 11.2 scrub ok
Dec 11 09:17:20 compute-0 ceph-mgr[74715]: log_channel(cluster) log [DBG] : pgmap v74: 353 pgs: 353 active+clean; 457 KiB data, 125 MiB used, 60 GiB / 60 GiB avail; 48 KiB/s rd, 1.0 KiB/s wr, 87 op/s; 516 B/s, 19 objects/s recovering
Dec 11 09:17:20 compute-0 ceph-mon[74426]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "5"} v 0)
Dec 11 09:17:20 compute-0 ceph-mon[74426]: log_channel(audit) log [INF] : from='mgr.14400 192.168.122.100:0/1063598679' entity='mgr.compute-0.wwpcae' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "5"}]: dispatch
Dec 11 09:17:21 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-2-0-compute-0-iryjby[94782]: 11/12/2025 09:17:21 : epoch 693a8bfe : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fef58003db0 fd 47 proxy header rest len failed header rlen = % (will set dead)
Dec 11 09:17:21 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-2-0-compute-0-iryjby[94782]: 11/12/2025 09:17:21 : epoch 693a8bfe : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fef54004050 fd 47 proxy header rest len failed header rlen = % (will set dead)
Dec 11 09:17:21 compute-0 ceph-mon[74426]: mon.compute-0@0(leader).osd e72 do_prune osdmap full prune enabled
Dec 11 09:17:21 compute-0 ceph-mon[74426]: log_channel(audit) log [INF] : from='mgr.14400 192.168.122.100:0/1063598679' entity='mgr.compute-0.wwpcae' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "5"}]': finished
Dec 11 09:17:21 compute-0 ceph-mon[74426]: mon.compute-0@0(leader).osd e73 e73: 3 total, 3 up, 3 in
Dec 11 09:17:21 compute-0 ceph-mon[74426]: log_channel(cluster) log [DBG] : osdmap e73: 3 total, 3 up, 3 in
Dec 11 09:17:21 compute-0 ceph-mon[74426]: 12.2 scrub starts
Dec 11 09:17:21 compute-0 ceph-mon[74426]: 12.2 scrub ok
Dec 11 09:17:21 compute-0 ceph-mon[74426]: 8.6 scrub starts
Dec 11 09:17:21 compute-0 ceph-mon[74426]: 8.6 scrub ok
Dec 11 09:17:21 compute-0 ceph-mon[74426]: 11.2 scrub starts
Dec 11 09:17:21 compute-0 ceph-mon[74426]: 11.2 scrub ok
Dec 11 09:17:21 compute-0 ceph-mon[74426]: 12.5 scrub starts
Dec 11 09:17:21 compute-0 ceph-mon[74426]: 12.5 scrub ok
Dec 11 09:17:21 compute-0 ceph-mon[74426]: from='mgr.14400 192.168.122.100:0/1063598679' entity='mgr.compute-0.wwpcae' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "5"}]: dispatch
Dec 11 09:17:21 compute-0 ceph-osd[82859]: log_channel(cluster) log [DBG] : 8.1 scrub starts
Dec 11 09:17:21 compute-0 ceph-osd[82859]: log_channel(cluster) log [DBG] : 8.1 scrub ok
Dec 11 09:17:22 compute-0 ceph-mon[74426]: mon.compute-0@0(leader).osd e73 do_prune osdmap full prune enabled
Dec 11 09:17:22 compute-0 ceph-mon[74426]: pgmap v74: 353 pgs: 353 active+clean; 457 KiB data, 125 MiB used, 60 GiB / 60 GiB avail; 48 KiB/s rd, 1.0 KiB/s wr, 87 op/s; 516 B/s, 19 objects/s recovering
Dec 11 09:17:22 compute-0 ceph-mon[74426]: from='mgr.14400 192.168.122.100:0/1063598679' entity='mgr.compute-0.wwpcae' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "5"}]': finished
Dec 11 09:17:22 compute-0 ceph-mon[74426]: osdmap e73: 3 total, 3 up, 3 in
Dec 11 09:17:22 compute-0 ceph-mon[74426]: 8.1 scrub starts
Dec 11 09:17:22 compute-0 ceph-mon[74426]: 8.1 scrub ok
Dec 11 09:17:22 compute-0 ceph-mon[74426]: 8.1f scrub starts
Dec 11 09:17:22 compute-0 ceph-mon[74426]: 8.1f scrub ok
Dec 11 09:17:22 compute-0 ceph-mon[74426]: 12.0 scrub starts
Dec 11 09:17:22 compute-0 ceph-mon[74426]: 12.0 scrub ok
Dec 11 09:17:22 compute-0 ceph-mon[74426]: mon.compute-0@0(leader).osd e74 e74: 3 total, 3 up, 3 in
Dec 11 09:17:22 compute-0 ceph-mon[74426]: log_channel(cluster) log [DBG] : osdmap e74: 3 total, 3 up, 3 in
Dec 11 09:17:22 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-keepalived-nfs-cephfs-compute-0-ewssxv[95785]: Thu Dec 11 09:17:22 2025: (VI_0) Entering MASTER STATE
Dec 11 09:17:22 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-2-0-compute-0-iryjby[94782]: 11/12/2025 09:17:22 : epoch 693a8bfe : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fef60002b70 fd 47 proxy header rest len failed header rlen = % (will set dead)
Dec 11 09:17:22 compute-0 ceph-osd[82859]: log_channel(cluster) log [DBG] : 9.0 scrub starts
Dec 11 09:17:22 compute-0 ceph-osd[82859]: log_channel(cluster) log [DBG] : 9.0 scrub ok
Dec 11 09:17:22 compute-0 ceph-mgr[74715]: [progress INFO root] Completed event c8c486e5-74fd-4d9b-9338-0db5c79a1e14 (Global Recovery Event) in 5 seconds
Dec 11 09:17:22 compute-0 ceph-mgr[74715]: log_channel(cluster) log [DBG] : pgmap v77: 353 pgs: 353 active+clean; 457 KiB data, 125 MiB used, 60 GiB / 60 GiB avail; 49 KiB/s rd, 1.1 KiB/s wr, 89 op/s; 527 B/s, 20 objects/s recovering
Dec 11 09:17:22 compute-0 ceph-mon[74426]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "6"} v 0)
Dec 11 09:17:22 compute-0 ceph-mon[74426]: log_channel(audit) log [INF] : from='mgr.14400 192.168.122.100:0/1063598679' entity='mgr.compute-0.wwpcae' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "6"}]: dispatch
Dec 11 09:17:23 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-2-0-compute-0-iryjby[94782]: 11/12/2025 09:17:23 : epoch 693a8bfe : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fef68002910 fd 47 proxy header rest len failed header rlen = % (will set dead)
Dec 11 09:17:23 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-2-0-compute-0-iryjby[94782]: 11/12/2025 09:17:23 : epoch 693a8bfe : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fef58003db0 fd 47 proxy header rest len failed header rlen = % (will set dead)
Dec 11 09:17:23 compute-0 ceph-mon[74426]: mon.compute-0@0(leader).osd e74 do_prune osdmap full prune enabled
Dec 11 09:17:23 compute-0 ceph-mon[74426]: log_channel(audit) log [INF] : from='mgr.14400 192.168.122.100:0/1063598679' entity='mgr.compute-0.wwpcae' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "6"}]': finished
Dec 11 09:17:23 compute-0 ceph-mon[74426]: mon.compute-0@0(leader).osd e75 e75: 3 total, 3 up, 3 in
Dec 11 09:17:23 compute-0 ceph-mon[74426]: osdmap e74: 3 total, 3 up, 3 in
Dec 11 09:17:23 compute-0 ceph-mon[74426]: 9.0 scrub starts
Dec 11 09:17:23 compute-0 ceph-mon[74426]: 9.0 scrub ok
Dec 11 09:17:23 compute-0 ceph-mon[74426]: 11.8 scrub starts
Dec 11 09:17:23 compute-0 ceph-mon[74426]: 11.8 scrub ok
Dec 11 09:17:23 compute-0 ceph-mon[74426]: 12.1f scrub starts
Dec 11 09:17:23 compute-0 ceph-mon[74426]: from='mgr.14400 192.168.122.100:0/1063598679' entity='mgr.compute-0.wwpcae' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "6"}]: dispatch
Dec 11 09:17:23 compute-0 ceph-mon[74426]: 12.1f scrub ok
Dec 11 09:17:23 compute-0 ceph-mon[74426]: log_channel(cluster) log [DBG] : osdmap e75: 3 total, 3 up, 3 in
Dec 11 09:17:23 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 75 pg[10.d( empty local-lis/les=0/0 n=0 ec=61/50 lis/c=68/68 les/c/f=69/69/0 sis=75) [1] r=0 lpr=75 pi=[68,75)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 11 09:17:23 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 75 pg[10.15( empty local-lis/les=0/0 n=0 ec=61/50 lis/c=68/68 les/c/f=69/69/0 sis=75) [1] r=0 lpr=75 pi=[68,75)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 11 09:17:23 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 75 pg[10.1d( empty local-lis/les=0/0 n=0 ec=61/50 lis/c=68/68 les/c/f=69/69/0 sis=75) [1] r=0 lpr=75 pi=[68,75)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 11 09:17:23 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 75 pg[10.5( empty local-lis/les=0/0 n=0 ec=61/50 lis/c=68/68 les/c/f=69/69/0 sis=75) [1] r=0 lpr=75 pi=[68,75)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 11 09:17:23 compute-0 ceph-osd[82859]: log_channel(cluster) log [DBG] : 9.1 scrub starts
Dec 11 09:17:23 compute-0 ceph-osd[82859]: log_channel(cluster) log [DBG] : 9.1 scrub ok
Dec 11 09:17:24 compute-0 ceph-mon[74426]: mon.compute-0@0(leader).osd e75 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 11 09:17:24 compute-0 ceph-mon[74426]: mon.compute-0@0(leader).osd e75 do_prune osdmap full prune enabled
Dec 11 09:17:24 compute-0 ceph-mon[74426]: mon.compute-0@0(leader).osd e76 e76: 3 total, 3 up, 3 in
Dec 11 09:17:24 compute-0 ceph-mon[74426]: log_channel(cluster) log [DBG] : osdmap e76: 3 total, 3 up, 3 in
Dec 11 09:17:24 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 76 pg[10.15( empty local-lis/les=0/0 n=0 ec=61/50 lis/c=68/68 les/c/f=69/69/0 sis=76) [1]/[2] r=-1 lpr=76 pi=[68,76)/1 crt=0'0 mlcod 0'0 remapped mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 11 09:17:24 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 76 pg[10.15( empty local-lis/les=0/0 n=0 ec=61/50 lis/c=68/68 les/c/f=69/69/0 sis=76) [1]/[2] r=-1 lpr=76 pi=[68,76)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec 11 09:17:24 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 76 pg[10.d( empty local-lis/les=0/0 n=0 ec=61/50 lis/c=68/68 les/c/f=69/69/0 sis=76) [1]/[2] r=-1 lpr=76 pi=[68,76)/1 crt=0'0 mlcod 0'0 remapped mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 11 09:17:24 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 76 pg[10.d( empty local-lis/les=0/0 n=0 ec=61/50 lis/c=68/68 les/c/f=69/69/0 sis=76) [1]/[2] r=-1 lpr=76 pi=[68,76)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec 11 09:17:24 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 76 pg[10.5( empty local-lis/les=0/0 n=0 ec=61/50 lis/c=68/68 les/c/f=69/69/0 sis=76) [1]/[2] r=-1 lpr=76 pi=[68,76)/1 crt=0'0 mlcod 0'0 remapped mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 11 09:17:24 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 76 pg[10.5( empty local-lis/les=0/0 n=0 ec=61/50 lis/c=68/68 les/c/f=69/69/0 sis=76) [1]/[2] r=-1 lpr=76 pi=[68,76)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec 11 09:17:24 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 76 pg[10.1d( empty local-lis/les=0/0 n=0 ec=61/50 lis/c=68/68 les/c/f=69/69/0 sis=76) [1]/[2] r=-1 lpr=76 pi=[68,76)/1 crt=0'0 mlcod 0'0 remapped mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 11 09:17:24 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 76 pg[10.1d( empty local-lis/les=0/0 n=0 ec=61/50 lis/c=68/68 les/c/f=69/69/0 sis=76) [1]/[2] r=-1 lpr=76 pi=[68,76)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec 11 09:17:24 compute-0 ceph-mon[74426]: pgmap v77: 353 pgs: 353 active+clean; 457 KiB data, 125 MiB used, 60 GiB / 60 GiB avail; 49 KiB/s rd, 1.1 KiB/s wr, 89 op/s; 527 B/s, 20 objects/s recovering
Dec 11 09:17:24 compute-0 ceph-mon[74426]: from='mgr.14400 192.168.122.100:0/1063598679' entity='mgr.compute-0.wwpcae' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "6"}]': finished
Dec 11 09:17:24 compute-0 ceph-mon[74426]: osdmap e75: 3 total, 3 up, 3 in
Dec 11 09:17:24 compute-0 ceph-mon[74426]: 9.1 scrub starts
Dec 11 09:17:24 compute-0 ceph-mon[74426]: 9.1 scrub ok
Dec 11 09:17:24 compute-0 ceph-mon[74426]: osdmap e76: 3 total, 3 up, 3 in
Dec 11 09:17:24 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-2-0-compute-0-iryjby[94782]: 11/12/2025 09:17:24 : epoch 693a8bfe : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fef54004050 fd 47 proxy header rest len failed header rlen = % (will set dead)
Dec 11 09:17:24 compute-0 ceph-osd[82859]: log_channel(cluster) log [DBG] : 8.0 scrub starts
Dec 11 09:17:24 compute-0 ceph-osd[82859]: log_channel(cluster) log [DBG] : 8.0 scrub ok
Dec 11 09:17:24 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-2-0-compute-0-iryjby[94782]: 11/12/2025 09:17:24 : epoch 693a8bfe : compute-0 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Dec 11 09:17:24 compute-0 ceph-mgr[74715]: log_channel(cluster) log [DBG] : pgmap v80: 353 pgs: 4 remapped+peering, 349 active+clean; 457 KiB data, 125 MiB used, 60 GiB / 60 GiB avail
Dec 11 09:17:25 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-2-0-compute-0-iryjby[94782]: 11/12/2025 09:17:25 : epoch 693a8bfe : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fef60003490 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 11 09:17:25 compute-0 ceph-mon[74426]: mon.compute-0@0(leader).osd e76 do_prune osdmap full prune enabled
Dec 11 09:17:25 compute-0 ceph-mon[74426]: mon.compute-0@0(leader).osd e77 e77: 3 total, 3 up, 3 in
Dec 11 09:17:25 compute-0 ceph-mon[74426]: log_channel(cluster) log [DBG] : osdmap e77: 3 total, 3 up, 3 in
Dec 11 09:17:25 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-2-0-compute-0-iryjby[94782]: 11/12/2025 09:17:25 : epoch 693a8bfe : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fef4c000b60 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 11 09:17:25 compute-0 ceph-osd[82859]: log_channel(cluster) log [DBG] : 8.7 scrub starts
Dec 11 09:17:25 compute-0 ceph-osd[82859]: log_channel(cluster) log [DBG] : 8.7 scrub ok
Dec 11 09:17:26 compute-0 ceph-mon[74426]: mon.compute-0@0(leader).osd e77 do_prune osdmap full prune enabled
Dec 11 09:17:26 compute-0 ceph-mon[74426]: 8.0 scrub starts
Dec 11 09:17:26 compute-0 ceph-mon[74426]: 8.0 scrub ok
Dec 11 09:17:26 compute-0 ceph-mon[74426]: pgmap v80: 353 pgs: 4 remapped+peering, 349 active+clean; 457 KiB data, 125 MiB used, 60 GiB / 60 GiB avail
Dec 11 09:17:26 compute-0 ceph-mon[74426]: osdmap e77: 3 total, 3 up, 3 in
Dec 11 09:17:26 compute-0 ceph-mon[74426]: mon.compute-0@0(leader).osd e78 e78: 3 total, 3 up, 3 in
Dec 11 09:17:26 compute-0 ceph-mon[74426]: log_channel(cluster) log [DBG] : osdmap e78: 3 total, 3 up, 3 in
Dec 11 09:17:26 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 78 pg[10.5( v 77'1103 (0'0,77'1103] local-lis/les=0/0 n=6 ec=61/50 lis/c=76/68 les/c/f=77/69/0 sis=78) [1] r=0 lpr=78 pi=[68,78)/1 luod=0'0 crt=71'1098 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 1 -> 1, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Dec 11 09:17:26 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 78 pg[10.1d( v 56'1088 (0'0,56'1088] local-lis/les=0/0 n=5 ec=61/50 lis/c=76/68 les/c/f=77/69/0 sis=78) [1] r=0 lpr=78 pi=[68,78)/1 luod=0'0 crt=56'1088 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 1 -> 1, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Dec 11 09:17:26 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 78 pg[10.5( v 77'1103 (0'0,77'1103] local-lis/les=0/0 n=6 ec=61/50 lis/c=76/68 les/c/f=77/69/0 sis=78) [1] r=0 lpr=78 pi=[68,78)/1 crt=71'1098 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 11 09:17:26 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 78 pg[10.1d( v 56'1088 (0'0,56'1088] local-lis/les=0/0 n=5 ec=61/50 lis/c=76/68 les/c/f=77/69/0 sis=78) [1] r=0 lpr=78 pi=[68,78)/1 crt=56'1088 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 11 09:17:26 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 78 pg[10.d( v 56'1088 (0'0,56'1088] local-lis/les=0/0 n=8 ec=61/50 lis/c=76/68 les/c/f=77/69/0 sis=78) [1] r=0 lpr=78 pi=[68,78)/1 luod=0'0 crt=56'1088 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 1 -> 1, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Dec 11 09:17:26 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 78 pg[10.d( v 56'1088 (0'0,56'1088] local-lis/les=0/0 n=8 ec=61/50 lis/c=76/68 les/c/f=77/69/0 sis=78) [1] r=0 lpr=78 pi=[68,78)/1 crt=56'1088 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 11 09:17:26 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 78 pg[10.15( v 56'1088 (0'0,56'1088] local-lis/les=0/0 n=5 ec=61/50 lis/c=76/68 les/c/f=77/69/0 sis=78) [1] r=0 lpr=78 pi=[68,78)/1 luod=0'0 crt=56'1088 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 1 -> 1, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Dec 11 09:17:26 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 78 pg[10.15( v 56'1088 (0'0,56'1088] local-lis/les=0/0 n=5 ec=61/50 lis/c=76/68 les/c/f=77/69/0 sis=78) [1] r=0 lpr=78 pi=[68,78)/1 crt=56'1088 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 11 09:17:26 compute-0 ceph-mon[74426]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Dec 11 09:17:26 compute-0 ceph-mon[74426]: log_channel(audit) log [INF] : from='mgr.14400 192.168.122.100:0/1063598679' entity='mgr.compute-0.wwpcae' 
Dec 11 09:17:26 compute-0 ceph-mon[74426]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Dec 11 09:17:26 compute-0 ceph-mon[74426]: log_channel(audit) log [INF] : from='mgr.14400 192.168.122.100:0/1063598679' entity='mgr.compute-0.wwpcae' 
Dec 11 09:17:26 compute-0 ceph-mon[74426]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.ingress.nfs.cephfs}] v 0)
Dec 11 09:17:26 compute-0 ceph-mon[74426]: log_channel(audit) log [INF] : from='mgr.14400 192.168.122.100:0/1063598679' entity='mgr.compute-0.wwpcae' 
Dec 11 09:17:26 compute-0 ceph-mgr[74715]: [progress INFO root] complete: finished ev b599dccd-0ddb-4fbf-81b9-571c51e9910e (Updating ingress.nfs.cephfs deployment (+6 -> 6))
Dec 11 09:17:26 compute-0 ceph-mgr[74715]: [progress INFO root] Completed event b599dccd-0ddb-4fbf-81b9-571c51e9910e (Updating ingress.nfs.cephfs deployment (+6 -> 6)) in 40 seconds
Dec 11 09:17:26 compute-0 ceph-mon[74426]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.ingress.nfs.cephfs}] v 0)
Dec 11 09:17:26 compute-0 ceph-mon[74426]: log_channel(audit) log [INF] : from='mgr.14400 192.168.122.100:0/1063598679' entity='mgr.compute-0.wwpcae' 
Dec 11 09:17:26 compute-0 ceph-mgr[74715]: [progress INFO root] update: starting ev 8f0fe8b0-6f41-46a4-ae80-5ffaf388eb49 (Updating alertmanager deployment (+1 -> 1))
Dec 11 09:17:26 compute-0 ceph-mgr[74715]: [cephadm INFO cephadm.serve] Deploying daemon alertmanager.compute-0 on compute-0
Dec 11 09:17:26 compute-0 ceph-mgr[74715]: log_channel(cephadm) log [INF] : Deploying daemon alertmanager.compute-0 on compute-0
Dec 11 09:17:26 compute-0 sudo[95798]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 11 09:17:26 compute-0 sudo[95798]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 11 09:17:26 compute-0 sudo[95798]: pam_unix(sudo:session): session closed for user root
Dec 11 09:17:26 compute-0 sudo[95823]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/prometheus/alertmanager:v0.25.0 --timeout 895 _orch deploy --fsid 31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060
Dec 11 09:17:26 compute-0 sudo[95823]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 11 09:17:26 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-2-0-compute-0-iryjby[94782]: 11/12/2025 09:17:26 : epoch 693a8bfe : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fef70001110 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 11 09:17:26 compute-0 ceph-osd[82859]: log_channel(cluster) log [DBG] : 11.6 scrub starts
Dec 11 09:17:26 compute-0 ceph-osd[82859]: log_channel(cluster) log [DBG] : 11.6 scrub ok
Dec 11 09:17:26 compute-0 ceph-mgr[74715]: log_channel(cluster) log [DBG] : pgmap v83: 353 pgs: 4 remapped+peering, 349 active+clean; 457 KiB data, 125 MiB used, 60 GiB / 60 GiB avail
Dec 11 09:17:27 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-2-0-compute-0-iryjby[94782]: 11/12/2025 09:17:27 : epoch 693a8bfe : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fef54004050 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 11 09:17:27 compute-0 ceph-mgr[74715]: [volumes INFO mgr_util] scanning for idle connections..
Dec 11 09:17:27 compute-0 ceph-mgr[74715]: [volumes INFO mgr_util] cleaning up connections: []
Dec 11 09:17:27 compute-0 ceph-mgr[74715]: [volumes INFO mgr_util] scanning for idle connections..
Dec 11 09:17:27 compute-0 ceph-mgr[74715]: [volumes INFO mgr_util] cleaning up connections: []
Dec 11 09:17:27 compute-0 ceph-mgr[74715]: [volumes INFO mgr_util] scanning for idle connections..
Dec 11 09:17:27 compute-0 ceph-mgr[74715]: [volumes INFO mgr_util] cleaning up connections: []
Dec 11 09:17:27 compute-0 ceph-mon[74426]: 8.7 scrub starts
Dec 11 09:17:27 compute-0 ceph-mon[74426]: 8.7 scrub ok
Dec 11 09:17:27 compute-0 ceph-mon[74426]: 10.4 deep-scrub starts
Dec 11 09:17:27 compute-0 ceph-mon[74426]: 10.4 deep-scrub ok
Dec 11 09:17:27 compute-0 ceph-mon[74426]: osdmap e78: 3 total, 3 up, 3 in
Dec 11 09:17:27 compute-0 ceph-mon[74426]: from='mgr.14400 192.168.122.100:0/1063598679' entity='mgr.compute-0.wwpcae' 
Dec 11 09:17:27 compute-0 ceph-mon[74426]: from='mgr.14400 192.168.122.100:0/1063598679' entity='mgr.compute-0.wwpcae' 
Dec 11 09:17:27 compute-0 ceph-mon[74426]: from='mgr.14400 192.168.122.100:0/1063598679' entity='mgr.compute-0.wwpcae' 
Dec 11 09:17:27 compute-0 ceph-mon[74426]: from='mgr.14400 192.168.122.100:0/1063598679' entity='mgr.compute-0.wwpcae' 
Dec 11 09:17:27 compute-0 ceph-mon[74426]: Deploying daemon alertmanager.compute-0 on compute-0
Dec 11 09:17:27 compute-0 ceph-mon[74426]: 11.6 scrub starts
Dec 11 09:17:27 compute-0 ceph-mon[74426]: 11.6 scrub ok
Dec 11 09:17:27 compute-0 ceph-mon[74426]: mon.compute-0@0(leader).osd e78 do_prune osdmap full prune enabled
Dec 11 09:17:27 compute-0 ceph-mon[74426]: mon.compute-0@0(leader).osd e79 e79: 3 total, 3 up, 3 in
Dec 11 09:17:27 compute-0 ceph-mon[74426]: log_channel(cluster) log [DBG] : osdmap e79: 3 total, 3 up, 3 in
Dec 11 09:17:27 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 79 pg[10.15( v 56'1088 (0'0,56'1088] local-lis/les=78/79 n=5 ec=61/50 lis/c=76/68 les/c/f=77/69/0 sis=78) [1] r=0 lpr=78 pi=[68,78)/1 crt=56'1088 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 11 09:17:27 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 79 pg[10.d( v 56'1088 (0'0,56'1088] local-lis/les=78/79 n=8 ec=61/50 lis/c=76/68 les/c/f=77/69/0 sis=78) [1] r=0 lpr=78 pi=[68,78)/1 crt=56'1088 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 11 09:17:27 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 79 pg[10.1d( v 56'1088 (0'0,56'1088] local-lis/les=78/79 n=5 ec=61/50 lis/c=76/68 les/c/f=77/69/0 sis=78) [1] r=0 lpr=78 pi=[68,78)/1 crt=56'1088 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 11 09:17:27 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 79 pg[10.5( v 77'1103 (0'0,77'1103] local-lis/les=78/79 n=6 ec=61/50 lis/c=76/68 les/c/f=77/69/0 sis=78) [1] r=0 lpr=78 pi=[68,78)/1 crt=77'1103 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 11 09:17:27 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-2-0-compute-0-iryjby[94782]: 11/12/2025 09:17:27 : epoch 693a8bfe : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fef60003490 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 11 09:17:27 compute-0 ceph-osd[82859]: log_channel(cluster) log [DBG] : 9.4 scrub starts
Dec 11 09:17:27 compute-0 ceph-osd[82859]: log_channel(cluster) log [DBG] : 9.4 scrub ok
Dec 11 09:17:27 compute-0 ceph-mgr[74715]: [progress INFO root] Writing back 26 completed events
Dec 11 09:17:27 compute-0 ceph-mon[74426]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0)
Dec 11 09:17:27 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-2-0-compute-0-iryjby[94782]: 11/12/2025 09:17:27 : epoch 693a8bfe : compute-0 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Dec 11 09:17:27 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-2-0-compute-0-iryjby[94782]: 11/12/2025 09:17:27 : epoch 693a8bfe : compute-0 : ganesha.nfsd-2[reaper] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Dec 11 09:17:28 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-2-0-compute-0-iryjby[94782]: 11/12/2025 09:17:28 : epoch 693a8bfe : compute-0 : ganesha.nfsd-2[reaper] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Dec 11 09:17:28 compute-0 ceph-mon[74426]: log_channel(audit) log [INF] : from='mgr.14400 192.168.122.100:0/1063598679' entity='mgr.compute-0.wwpcae' 
Dec 11 09:17:28 compute-0 ceph-mgr[74715]: [progress WARNING root] Starting Global Recovery Event,4 pgs not in active + clean state
Dec 11 09:17:28 compute-0 ceph-mon[74426]: pgmap v83: 353 pgs: 4 remapped+peering, 349 active+clean; 457 KiB data, 125 MiB used, 60 GiB / 60 GiB avail
Dec 11 09:17:28 compute-0 ceph-mon[74426]: osdmap e79: 3 total, 3 up, 3 in
Dec 11 09:17:28 compute-0 ceph-mon[74426]: 9.4 scrub starts
Dec 11 09:17:28 compute-0 ceph-mon[74426]: 9.4 scrub ok
Dec 11 09:17:28 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-2-0-compute-0-iryjby[94782]: 11/12/2025 09:17:28 : epoch 693a8bfe : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fef4c0016a0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 11 09:17:28 compute-0 ceph-osd[82859]: log_channel(cluster) log [DBG] : 11.18 deep-scrub starts
Dec 11 09:17:28 compute-0 ceph-osd[82859]: log_channel(cluster) log [DBG] : 11.18 deep-scrub ok
Dec 11 09:17:28 compute-0 sudo[95978]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cvforgamuijdrkhpmiqlzcdfcjlygpdy ; /usr/bin/python3'
Dec 11 09:17:28 compute-0 sudo[95978]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 11 09:17:28 compute-0 ceph-mgr[74715]: log_channel(cluster) log [DBG] : pgmap v85: 353 pgs: 353 active+clean; 457 KiB data, 125 MiB used, 60 GiB / 60 GiB avail; 147 B/s, 8 objects/s recovering
Dec 11 09:17:28 compute-0 ceph-mon[74426]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "7"} v 0)
Dec 11 09:17:28 compute-0 ceph-mon[74426]: log_channel(audit) log [INF] : from='mgr.14400 192.168.122.100:0/1063598679' entity='mgr.compute-0.wwpcae' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "7"}]: dispatch
Dec 11 09:17:28 compute-0 python3[95980]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint radosgw-admin quay.io/ceph/ceph:v19 --fsid 31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch ps --daemon-type rgw --format json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 11 09:17:29 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-2-0-compute-0-iryjby[94782]: 11/12/2025 09:17:29 : epoch 693a8bfe : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fef70001110 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 11 09:17:29 compute-0 ceph-mon[74426]: mon.compute-0@0(leader).osd e79 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 11 09:17:29 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-2-0-compute-0-iryjby[94782]: 11/12/2025 09:17:29 : epoch 693a8bfe : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fef54004050 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 11 09:17:29 compute-0 ceph-mon[74426]: mon.compute-0@0(leader).osd e79 do_prune osdmap full prune enabled
Dec 11 09:17:29 compute-0 ceph-mon[74426]: from='mgr.14400 192.168.122.100:0/1063598679' entity='mgr.compute-0.wwpcae' 
Dec 11 09:17:29 compute-0 ceph-mon[74426]: 11.18 deep-scrub starts
Dec 11 09:17:29 compute-0 ceph-mon[74426]: 11.18 deep-scrub ok
Dec 11 09:17:29 compute-0 ceph-mon[74426]: 8.b deep-scrub starts
Dec 11 09:17:29 compute-0 ceph-mon[74426]: 8.b deep-scrub ok
Dec 11 09:17:29 compute-0 ceph-mon[74426]: from='mgr.14400 192.168.122.100:0/1063598679' entity='mgr.compute-0.wwpcae' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "7"}]: dispatch
Dec 11 09:17:29 compute-0 ceph-mon[74426]: 12.1b scrub starts
Dec 11 09:17:29 compute-0 ceph-mon[74426]: 12.1b scrub ok
Dec 11 09:17:29 compute-0 ceph-mon[74426]: log_channel(audit) log [INF] : from='mgr.14400 192.168.122.100:0/1063598679' entity='mgr.compute-0.wwpcae' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "7"}]': finished
Dec 11 09:17:29 compute-0 ceph-mon[74426]: mon.compute-0@0(leader).osd e80 e80: 3 total, 3 up, 3 in
Dec 11 09:17:29 compute-0 ceph-mon[74426]: log_channel(cluster) log [DBG] : osdmap e80: 3 total, 3 up, 3 in
Dec 11 09:17:29 compute-0 ceph-osd[82859]: log_channel(cluster) log [DBG] : 9.1a scrub starts
Dec 11 09:17:29 compute-0 ceph-osd[82859]: log_channel(cluster) log [DBG] : 9.1a scrub ok
Dec 11 09:17:29 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-keepalived-nfs-cephfs-compute-0-ewssxv[95785]: Thu Dec 11 09:17:29 2025: (VI_0) Received advert from 192.168.122.101 with lower priority 90, ours 100, forcing new election
Dec 11 09:17:30 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 80 pg[10.16( v 56'1088 (0'0,56'1088] local-lis/les=70/71 n=4 ec=61/50 lis/c=70/70 les/c/f=71/71/0 sis=80 pruub=10.767364502s) [0] r=-1 lpr=80 pi=[70,80)/1 crt=56'1088 mlcod 0'0 active pruub 202.338684082s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 11 09:17:30 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 80 pg[10.16( v 56'1088 (0'0,56'1088] local-lis/les=70/71 n=4 ec=61/50 lis/c=70/70 les/c/f=71/71/0 sis=80 pruub=10.767309189s) [0] r=-1 lpr=80 pi=[70,80)/1 crt=56'1088 mlcod 0'0 unknown NOTIFY pruub 202.338684082s@ mbc={}] state<Start>: transitioning to Stray
Dec 11 09:17:30 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 80 pg[10.e( v 56'1088 (0'0,56'1088] local-lis/les=70/71 n=6 ec=61/50 lis/c=70/70 les/c/f=71/71/0 sis=80 pruub=10.766874313s) [0] r=-1 lpr=80 pi=[70,80)/1 crt=56'1088 mlcod 0'0 active pruub 202.338729858s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 11 09:17:30 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 80 pg[10.e( v 56'1088 (0'0,56'1088] local-lis/les=70/71 n=6 ec=61/50 lis/c=70/70 les/c/f=71/71/0 sis=80 pruub=10.766844749s) [0] r=-1 lpr=80 pi=[70,80)/1 crt=56'1088 mlcod 0'0 unknown NOTIFY pruub 202.338729858s@ mbc={}] state<Start>: transitioning to Stray
Dec 11 09:17:30 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 80 pg[10.6( v 56'1088 (0'0,56'1088] local-lis/les=70/71 n=6 ec=61/50 lis/c=70/70 les/c/f=71/71/0 sis=80 pruub=10.766448021s) [0] r=-1 lpr=80 pi=[70,80)/1 crt=56'1088 mlcod 0'0 active pruub 202.338775635s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 11 09:17:30 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 80 pg[10.6( v 56'1088 (0'0,56'1088] local-lis/les=70/71 n=6 ec=61/50 lis/c=70/70 les/c/f=71/71/0 sis=80 pruub=10.766408920s) [0] r=-1 lpr=80 pi=[70,80)/1 crt=56'1088 mlcod 0'0 unknown NOTIFY pruub 202.338775635s@ mbc={}] state<Start>: transitioning to Stray
Dec 11 09:17:30 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 80 pg[10.1e( v 56'1088 (0'0,56'1088] local-lis/les=70/71 n=5 ec=61/50 lis/c=70/70 les/c/f=71/71/0 sis=80 pruub=10.766353607s) [0] r=-1 lpr=80 pi=[70,80)/1 crt=56'1088 mlcod 0'0 active pruub 202.338943481s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 11 09:17:30 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 80 pg[10.1e( v 56'1088 (0'0,56'1088] local-lis/les=70/71 n=5 ec=61/50 lis/c=70/70 les/c/f=71/71/0 sis=80 pruub=10.766327858s) [0] r=-1 lpr=80 pi=[70,80)/1 crt=56'1088 mlcod 0'0 unknown NOTIFY pruub 202.338943481s@ mbc={}] state<Start>: transitioning to Stray
Dec 11 09:17:30 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-2-0-compute-0-iryjby[94782]: 11/12/2025 09:17:30 : epoch 693a8bfe : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fef60003490 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 11 09:17:30 compute-0 ceph-mon[74426]: mon.compute-0@0(leader).osd e80 do_prune osdmap full prune enabled
Dec 11 09:17:30 compute-0 ceph-osd[82859]: log_channel(cluster) log [DBG] : 9.1b scrub starts
Dec 11 09:17:30 compute-0 ceph-mon[74426]: pgmap v85: 353 pgs: 353 active+clean; 457 KiB data, 125 MiB used, 60 GiB / 60 GiB avail; 147 B/s, 8 objects/s recovering
Dec 11 09:17:30 compute-0 ceph-mon[74426]: from='mgr.14400 192.168.122.100:0/1063598679' entity='mgr.compute-0.wwpcae' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "7"}]': finished
Dec 11 09:17:30 compute-0 ceph-mon[74426]: osdmap e80: 3 total, 3 up, 3 in
Dec 11 09:17:30 compute-0 ceph-mon[74426]: 9.1a scrub starts
Dec 11 09:17:30 compute-0 ceph-mon[74426]: 9.1a scrub ok
Dec 11 09:17:30 compute-0 ceph-mon[74426]: 12.11 deep-scrub starts
Dec 11 09:17:30 compute-0 ceph-mon[74426]: 12.11 deep-scrub ok
Dec 11 09:17:30 compute-0 ceph-mon[74426]: 12.16 scrub starts
Dec 11 09:17:30 compute-0 ceph-mon[74426]: 12.16 scrub ok
Dec 11 09:17:30 compute-0 ceph-osd[82859]: log_channel(cluster) log [DBG] : 9.1b scrub ok
Dec 11 09:17:30 compute-0 podman[95981]: 2025-12-11 09:17:30.891966807 +0000 UTC m=+1.930273111 container create e5ac1894e40bbf12fe667f99df891eb89d0e4f903e6021fbb28fefdc224955d3 (image=quay.io/ceph/ceph:v19, name=clever_dewdney, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1)
Dec 11 09:17:30 compute-0 ceph-mon[74426]: mon.compute-0@0(leader).osd e81 e81: 3 total, 3 up, 3 in
Dec 11 09:17:30 compute-0 ceph-mgr[74715]: log_channel(cluster) log [DBG] : pgmap v88: 353 pgs: 353 active+clean; 457 KiB data, 125 MiB used, 60 GiB / 60 GiB avail; 150 B/s, 8 objects/s recovering
Dec 11 09:17:30 compute-0 ceph-mon[74426]: log_channel(cluster) log [DBG] : osdmap e81: 3 total, 3 up, 3 in
Dec 11 09:17:30 compute-0 ceph-mon[74426]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "8"} v 0)
Dec 11 09:17:30 compute-0 ceph-mon[74426]: log_channel(audit) log [INF] : from='mgr.14400 192.168.122.100:0/1063598679' entity='mgr.compute-0.wwpcae' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "8"}]: dispatch
Dec 11 09:17:30 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 81 pg[10.16( v 56'1088 (0'0,56'1088] local-lis/les=70/71 n=4 ec=61/50 lis/c=70/70 les/c/f=71/71/0 sis=81) [0]/[1] r=0 lpr=81 pi=[70,81)/1 crt=56'1088 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Dec 11 09:17:30 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 81 pg[10.16( v 56'1088 (0'0,56'1088] local-lis/les=70/71 n=4 ec=61/50 lis/c=70/70 les/c/f=71/71/0 sis=81) [0]/[1] r=0 lpr=81 pi=[70,81)/1 crt=56'1088 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Dec 11 09:17:30 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 81 pg[10.e( v 56'1088 (0'0,56'1088] local-lis/les=70/71 n=6 ec=61/50 lis/c=70/70 les/c/f=71/71/0 sis=81) [0]/[1] r=0 lpr=81 pi=[70,81)/1 crt=56'1088 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Dec 11 09:17:30 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 81 pg[10.e( v 56'1088 (0'0,56'1088] local-lis/les=70/71 n=6 ec=61/50 lis/c=70/70 les/c/f=71/71/0 sis=81) [0]/[1] r=0 lpr=81 pi=[70,81)/1 crt=56'1088 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Dec 11 09:17:30 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 81 pg[10.6( v 56'1088 (0'0,56'1088] local-lis/les=70/71 n=6 ec=61/50 lis/c=70/70 les/c/f=71/71/0 sis=81) [0]/[1] r=0 lpr=81 pi=[70,81)/1 crt=56'1088 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Dec 11 09:17:30 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 81 pg[10.6( v 56'1088 (0'0,56'1088] local-lis/les=70/71 n=6 ec=61/50 lis/c=70/70 les/c/f=71/71/0 sis=81) [0]/[1] r=0 lpr=81 pi=[70,81)/1 crt=56'1088 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Dec 11 09:17:30 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 81 pg[10.1e( v 56'1088 (0'0,56'1088] local-lis/les=70/71 n=5 ec=61/50 lis/c=70/70 les/c/f=71/71/0 sis=81) [0]/[1] r=0 lpr=81 pi=[70,81)/1 crt=56'1088 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Dec 11 09:17:30 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 81 pg[10.1e( v 56'1088 (0'0,56'1088] local-lis/les=70/71 n=5 ec=61/50 lis/c=70/70 les/c/f=71/71/0 sis=81) [0]/[1] r=0 lpr=81 pi=[70,81)/1 crt=56'1088 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Dec 11 09:17:30 compute-0 podman[95981]: 2025-12-11 09:17:30.849803388 +0000 UTC m=+1.888109722 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Dec 11 09:17:30 compute-0 podman[95888]: 2025-12-11 09:17:30.945704656 +0000 UTC m=+3.925922873 volume create 139530ecd0a6694c9accb31c5b77b45e63e023e1c7c72bc7fd1b77373b12be73
Dec 11 09:17:30 compute-0 podman[95888]: 2025-12-11 09:17:30.95357507 +0000 UTC m=+3.933793297 container create 2fa8dbe992d37289966b4e5550aa123db587b932040891b7fea477b4bf2c172f (image=quay.io/prometheus/alertmanager:v0.25.0, name=amazing_heisenberg, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec 11 09:17:30 compute-0 podman[95888]: 2025-12-11 09:17:30.931957589 +0000 UTC m=+3.912175836 image pull c8568f914cd25b2062c44e9f79f9c18da6e3b85fe0c47a12a2191c61426c2b19 quay.io/prometheus/alertmanager:v0.25.0
Dec 11 09:17:30 compute-0 systemd[1]: Started libpod-conmon-e5ac1894e40bbf12fe667f99df891eb89d0e4f903e6021fbb28fefdc224955d3.scope.
Dec 11 09:17:30 compute-0 systemd[1]: Started libpod-conmon-2fa8dbe992d37289966b4e5550aa123db587b932040891b7fea477b4bf2c172f.scope.
Dec 11 09:17:30 compute-0 systemd[1]: Started libcrun container.
Dec 11 09:17:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/032da101a0006ac3ec33d87cc914bc4d8611839ee0eb8bcdaed08c65d0e279c0/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec 11 09:17:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/032da101a0006ac3ec33d87cc914bc4d8611839ee0eb8bcdaed08c65d0e279c0/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 11 09:17:31 compute-0 systemd[1]: Started libcrun container.
Dec 11 09:17:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/064438cecf3c2332f9a25e490dc99d8df91ac49d87a5385370e5c6d9180b78bf/merged/alertmanager supports timestamps until 2038 (0x7fffffff)
Dec 11 09:17:31 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-2-0-compute-0-iryjby[94782]: 11/12/2025 09:17:31 : epoch 693a8bfe : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fef4c0016a0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 11 09:17:31 compute-0 podman[95981]: 2025-12-11 09:17:31.018610219 +0000 UTC m=+2.056916553 container init e5ac1894e40bbf12fe667f99df891eb89d0e4f903e6021fbb28fefdc224955d3 (image=quay.io/ceph/ceph:v19, name=clever_dewdney, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1)
Dec 11 09:17:31 compute-0 podman[95888]: 2025-12-11 09:17:31.021251181 +0000 UTC m=+4.001469408 container init 2fa8dbe992d37289966b4e5550aa123db587b932040891b7fea477b4bf2c172f (image=quay.io/prometheus/alertmanager:v0.25.0, name=amazing_heisenberg, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec 11 09:17:31 compute-0 podman[95981]: 2025-12-11 09:17:31.028511426 +0000 UTC m=+2.066817740 container start e5ac1894e40bbf12fe667f99df891eb89d0e4f903e6021fbb28fefdc224955d3 (image=quay.io/ceph/ceph:v19, name=clever_dewdney, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, ceph=True, org.label-schema.schema-version=1.0)
Dec 11 09:17:31 compute-0 podman[95888]: 2025-12-11 09:17:31.02898095 +0000 UTC m=+4.009199177 container start 2fa8dbe992d37289966b4e5550aa123db587b932040891b7fea477b4bf2c172f (image=quay.io/prometheus/alertmanager:v0.25.0, name=amazing_heisenberg, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec 11 09:17:31 compute-0 amazing_heisenberg[96067]: 65534 65534
Dec 11 09:17:31 compute-0 systemd[1]: libpod-2fa8dbe992d37289966b4e5550aa123db587b932040891b7fea477b4bf2c172f.scope: Deactivated successfully.
Dec 11 09:17:31 compute-0 conmon[96067]: conmon 2fa8dbe992d37289966b <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-2fa8dbe992d37289966b4e5550aa123db587b932040891b7fea477b4bf2c172f.scope/container/memory.events
Dec 11 09:17:31 compute-0 podman[95981]: 2025-12-11 09:17:31.032869652 +0000 UTC m=+2.071175996 container attach e5ac1894e40bbf12fe667f99df891eb89d0e4f903e6021fbb28fefdc224955d3 (image=quay.io/ceph/ceph:v19, name=clever_dewdney, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, CEPH_REF=squid)
Dec 11 09:17:31 compute-0 clever_dewdney[96063]: ERROR: invalid flag --daemon-type
Dec 11 09:17:31 compute-0 systemd[1]: libpod-e5ac1894e40bbf12fe667f99df891eb89d0e4f903e6021fbb28fefdc224955d3.scope: Deactivated successfully.
Dec 11 09:17:31 compute-0 podman[95888]: 2025-12-11 09:17:31.104911138 +0000 UTC m=+4.085129365 container attach 2fa8dbe992d37289966b4e5550aa123db587b932040891b7fea477b4bf2c172f (image=quay.io/prometheus/alertmanager:v0.25.0, name=amazing_heisenberg, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec 11 09:17:31 compute-0 podman[95888]: 2025-12-11 09:17:31.105433634 +0000 UTC m=+4.085651881 container died 2fa8dbe992d37289966b4e5550aa123db587b932040891b7fea477b4bf2c172f (image=quay.io/prometheus/alertmanager:v0.25.0, name=amazing_heisenberg, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec 11 09:17:31 compute-0 podman[95981]: 2025-12-11 09:17:31.128578543 +0000 UTC m=+2.166884857 container died e5ac1894e40bbf12fe667f99df891eb89d0e4f903e6021fbb28fefdc224955d3 (image=quay.io/ceph/ceph:v19, name=clever_dewdney, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid)
Dec 11 09:17:31 compute-0 systemd[1]: var-lib-containers-storage-overlay-064438cecf3c2332f9a25e490dc99d8df91ac49d87a5385370e5c6d9180b78bf-merged.mount: Deactivated successfully.
Dec 11 09:17:31 compute-0 systemd[1]: var-lib-containers-storage-overlay-032da101a0006ac3ec33d87cc914bc4d8611839ee0eb8bcdaed08c65d0e279c0-merged.mount: Deactivated successfully.
Dec 11 09:17:31 compute-0 podman[95888]: 2025-12-11 09:17:31.21420959 +0000 UTC m=+4.194427817 container remove 2fa8dbe992d37289966b4e5550aa123db587b932040891b7fea477b4bf2c172f (image=quay.io/prometheus/alertmanager:v0.25.0, name=amazing_heisenberg, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec 11 09:17:31 compute-0 podman[95888]: 2025-12-11 09:17:31.218226546 +0000 UTC m=+4.198444773 volume remove 139530ecd0a6694c9accb31c5b77b45e63e023e1c7c72bc7fd1b77373b12be73
Dec 11 09:17:31 compute-0 podman[95981]: 2025-12-11 09:17:31.241488917 +0000 UTC m=+2.279795231 container remove e5ac1894e40bbf12fe667f99df891eb89d0e4f903e6021fbb28fefdc224955d3 (image=quay.io/ceph/ceph:v19, name=clever_dewdney, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Dec 11 09:17:31 compute-0 systemd[1]: libpod-conmon-e5ac1894e40bbf12fe667f99df891eb89d0e4f903e6021fbb28fefdc224955d3.scope: Deactivated successfully.
Dec 11 09:17:31 compute-0 systemd[1]: libpod-conmon-2fa8dbe992d37289966b4e5550aa123db587b932040891b7fea477b4bf2c172f.scope: Deactivated successfully.
Dec 11 09:17:31 compute-0 sudo[95978]: pam_unix(sudo:session): session closed for user root
Dec 11 09:17:31 compute-0 podman[96114]: 2025-12-11 09:17:31.282676146 +0000 UTC m=+0.043972366 volume create 679e70b108c13b4e54bb8b4dac0dfe5f86896c5bc43b2f752bdcda7f0c17fc0c
Dec 11 09:17:31 compute-0 podman[96114]: 2025-12-11 09:17:31.291970554 +0000 UTC m=+0.053266764 container create 509a283d977a824be4ad773a53b393cf44f3b38428e9b075bd0d3f44e0aea4d8 (image=quay.io/prometheus/alertmanager:v0.25.0, name=adoring_hertz, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec 11 09:17:31 compute-0 systemd[1]: Started libpod-conmon-509a283d977a824be4ad773a53b393cf44f3b38428e9b075bd0d3f44e0aea4d8.scope.
Dec 11 09:17:31 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-2-0-compute-0-iryjby[94782]: 11/12/2025 09:17:31 : epoch 693a8bfe : compute-0 : ganesha.nfsd-2[reaper] nfs_lift_grace_locked :STATE :EVENT :NFS Server Now NOT IN GRACE
Dec 11 09:17:31 compute-0 systemd[1]: Started libcrun container.
Dec 11 09:17:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/19beeb85d8207647d35e4544470e0fe8f0cc11f1d6a54454a996c5bf7dddfa4c/merged/alertmanager supports timestamps until 2038 (0x7fffffff)
Dec 11 09:17:31 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-2-0-compute-0-iryjby[94782]: 11/12/2025 09:17:31 : epoch 693a8bfe : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fef700022a0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 11 09:17:31 compute-0 podman[96114]: 2025-12-11 09:17:31.26604095 +0000 UTC m=+0.027337190 image pull c8568f914cd25b2062c44e9f79f9c18da6e3b85fe0c47a12a2191c61426c2b19 quay.io/prometheus/alertmanager:v0.25.0
Dec 11 09:17:31 compute-0 podman[96114]: 2025-12-11 09:17:31.643618631 +0000 UTC m=+0.404914871 container init 509a283d977a824be4ad773a53b393cf44f3b38428e9b075bd0d3f44e0aea4d8 (image=quay.io/prometheus/alertmanager:v0.25.0, name=adoring_hertz, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec 11 09:17:31 compute-0 podman[96114]: 2025-12-11 09:17:31.649818664 +0000 UTC m=+0.411114894 container start 509a283d977a824be4ad773a53b393cf44f3b38428e9b075bd0d3f44e0aea4d8 (image=quay.io/prometheus/alertmanager:v0.25.0, name=adoring_hertz, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec 11 09:17:31 compute-0 adoring_hertz[96129]: 65534 65534
Dec 11 09:17:31 compute-0 systemd[1]: libpod-509a283d977a824be4ad773a53b393cf44f3b38428e9b075bd0d3f44e0aea4d8.scope: Deactivated successfully.
Dec 11 09:17:31 compute-0 conmon[96129]: conmon 509a283d977a824be4ad <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-509a283d977a824be4ad773a53b393cf44f3b38428e9b075bd0d3f44e0aea4d8.scope/container/memory.events
Dec 11 09:17:31 compute-0 podman[96114]: 2025-12-11 09:17:31.68867987 +0000 UTC m=+0.449976090 container attach 509a283d977a824be4ad773a53b393cf44f3b38428e9b075bd0d3f44e0aea4d8 (image=quay.io/prometheus/alertmanager:v0.25.0, name=adoring_hertz, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec 11 09:17:31 compute-0 podman[96114]: 2025-12-11 09:17:31.689587917 +0000 UTC m=+0.450884147 container died 509a283d977a824be4ad773a53b393cf44f3b38428e9b075bd0d3f44e0aea4d8 (image=quay.io/prometheus/alertmanager:v0.25.0, name=adoring_hertz, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec 11 09:17:31 compute-0 ceph-osd[82859]: log_channel(cluster) log [DBG] : 8.1a scrub starts
Dec 11 09:17:31 compute-0 ceph-osd[82859]: log_channel(cluster) log [DBG] : 8.1a scrub ok
Dec 11 09:17:31 compute-0 podman[96114]: 2025-12-11 09:17:31.879959157 +0000 UTC m=+0.641255377 container remove 509a283d977a824be4ad773a53b393cf44f3b38428e9b075bd0d3f44e0aea4d8 (image=quay.io/prometheus/alertmanager:v0.25.0, name=adoring_hertz, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec 11 09:17:31 compute-0 podman[96114]: 2025-12-11 09:17:31.883760685 +0000 UTC m=+0.645056905 volume remove 679e70b108c13b4e54bb8b4dac0dfe5f86896c5bc43b2f752bdcda7f0c17fc0c
Dec 11 09:17:31 compute-0 systemd[1]: libpod-conmon-509a283d977a824be4ad773a53b393cf44f3b38428e9b075bd0d3f44e0aea4d8.scope: Deactivated successfully.
Dec 11 09:17:31 compute-0 ceph-mon[74426]: mon.compute-0@0(leader).osd e81 do_prune osdmap full prune enabled
Dec 11 09:17:31 compute-0 systemd[1]: var-lib-containers-storage-overlay-19beeb85d8207647d35e4544470e0fe8f0cc11f1d6a54454a996c5bf7dddfa4c-merged.mount: Deactivated successfully.
Dec 11 09:17:31 compute-0 systemd[1]: Reloading.
Dec 11 09:17:32 compute-0 systemd-rc-local-generator[96174]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 11 09:17:32 compute-0 systemd-sysv-generator[96177]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 11 09:17:32 compute-0 systemd[1]: Reloading.
Dec 11 09:17:32 compute-0 systemd-sysv-generator[96219]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 11 09:17:32 compute-0 systemd-rc-local-generator[96216]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 11 09:17:32 compute-0 systemd[1]: Starting Ceph alertmanager.compute-0 for 31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060...
Dec 11 09:17:32 compute-0 ceph-mon[74426]: log_channel(audit) log [INF] : from='mgr.14400 192.168.122.100:0/1063598679' entity='mgr.compute-0.wwpcae' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "8"}]': finished
Dec 11 09:17:32 compute-0 ceph-mon[74426]: mon.compute-0@0(leader).osd e82 e82: 3 total, 3 up, 3 in
Dec 11 09:17:32 compute-0 ceph-mon[74426]: log_channel(cluster) log [DBG] : osdmap e82: 3 total, 3 up, 3 in
Dec 11 09:17:32 compute-0 ceph-mon[74426]: 9.1b scrub starts
Dec 11 09:17:32 compute-0 ceph-mon[74426]: 9.1b scrub ok
Dec 11 09:17:32 compute-0 ceph-mon[74426]: 12.1e scrub starts
Dec 11 09:17:32 compute-0 ceph-mon[74426]: 12.1e scrub ok
Dec 11 09:17:32 compute-0 ceph-mon[74426]: pgmap v88: 353 pgs: 353 active+clean; 457 KiB data, 125 MiB used, 60 GiB / 60 GiB avail; 150 B/s, 8 objects/s recovering
Dec 11 09:17:32 compute-0 ceph-mon[74426]: osdmap e81: 3 total, 3 up, 3 in
Dec 11 09:17:32 compute-0 ceph-mon[74426]: from='mgr.14400 192.168.122.100:0/1063598679' entity='mgr.compute-0.wwpcae' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "8"}]: dispatch
Dec 11 09:17:32 compute-0 ceph-mon[74426]: 12.14 scrub starts
Dec 11 09:17:32 compute-0 ceph-mon[74426]: 12.14 scrub ok
Dec 11 09:17:32 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 82 pg[10.e( v 56'1088 (0'0,56'1088] local-lis/les=81/82 n=6 ec=61/50 lis/c=70/70 les/c/f=71/71/0 sis=81) [0]/[1] async=[0] r=0 lpr=81 pi=[70,81)/1 crt=56'1088 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 11 09:17:32 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-2-0-compute-0-iryjby[94782]: 11/12/2025 09:17:32 : epoch 693a8bfe : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fef54004050 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 11 09:17:32 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 82 pg[10.6( v 56'1088 (0'0,56'1088] local-lis/les=81/82 n=6 ec=61/50 lis/c=70/70 les/c/f=71/71/0 sis=81) [0]/[1] async=[0] r=0 lpr=81 pi=[70,81)/1 crt=56'1088 mlcod 0'0 active+remapped mbc={255={(0+1)=6}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 11 09:17:32 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 82 pg[10.16( v 56'1088 (0'0,56'1088] local-lis/les=81/82 n=4 ec=61/50 lis/c=70/70 les/c/f=71/71/0 sis=81) [0]/[1] async=[0] r=0 lpr=81 pi=[70,81)/1 crt=56'1088 mlcod 0'0 active+remapped mbc={255={(0+1)=4}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 11 09:17:32 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 82 pg[10.1e( v 56'1088 (0'0,56'1088] local-lis/les=81/82 n=5 ec=61/50 lis/c=70/70 les/c/f=71/71/0 sis=81) [0]/[1] async=[0] r=0 lpr=81 pi=[70,81)/1 crt=56'1088 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 11 09:17:32 compute-0 podman[96271]: 2025-12-11 09:17:32.716869377 +0000 UTC m=+0.024662346 image pull c8568f914cd25b2062c44e9f79f9c18da6e3b85fe0c47a12a2191c61426c2b19 quay.io/prometheus/alertmanager:v0.25.0
Dec 11 09:17:32 compute-0 ceph-mgr[74715]: log_channel(cluster) log [DBG] : pgmap v90: 353 pgs: 353 active+clean; 457 KiB data, 125 MiB used, 60 GiB / 60 GiB avail; 124 B/s, 7 objects/s recovering
Dec 11 09:17:32 compute-0 ceph-mon[74426]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "9"} v 0)
Dec 11 09:17:32 compute-0 ceph-mon[74426]: log_channel(audit) log [INF] : from='mgr.14400 192.168.122.100:0/1063598679' entity='mgr.compute-0.wwpcae' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "9"}]: dispatch
Dec 11 09:17:32 compute-0 podman[96271]: 2025-12-11 09:17:32.933610535 +0000 UTC m=+0.241403484 volume create d41bc3fc3ebefe39be0c02fbe1b70802d088e539dab7a004d605b30785c14792
Dec 11 09:17:33 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-2-0-compute-0-iryjby[94782]: 11/12/2025 09:17:33 : epoch 693a8bfe : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fef60003490 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 11 09:17:33 compute-0 podman[96271]: 2025-12-11 09:17:33.030492082 +0000 UTC m=+0.338285021 container create f6be69c7d77625e49bceb27822a33c1c522509b882e158ee75cfb836ad9761a2 (image=quay.io/prometheus/alertmanager:v0.25.0, name=ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-alertmanager-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec 11 09:17:33 compute-0 ceph-mgr[74715]: [progress INFO root] Completed event f03e6e89-8223-4a9b-8452-9518f59052f7 (Global Recovery Event) in 5 seconds
Dec 11 09:17:33 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-2-0-compute-0-iryjby[94782]: 11/12/2025 09:17:33 : epoch 693a8bfe : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fef4c0016a0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 11 09:17:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b261e1b794c9e24e398799eb3612e9f64174800fe1aa024fbdb0b21a6083a158/merged/alertmanager supports timestamps until 2038 (0x7fffffff)
Dec 11 09:17:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b261e1b794c9e24e398799eb3612e9f64174800fe1aa024fbdb0b21a6083a158/merged/etc/alertmanager supports timestamps until 2038 (0x7fffffff)
Dec 11 09:17:33 compute-0 ceph-mon[74426]: mon.compute-0@0(leader).osd e82 do_prune osdmap full prune enabled
Dec 11 09:17:33 compute-0 podman[96271]: 2025-12-11 09:17:33.600823528 +0000 UTC m=+0.908616467 container init f6be69c7d77625e49bceb27822a33c1c522509b882e158ee75cfb836ad9761a2 (image=quay.io/prometheus/alertmanager:v0.25.0, name=ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-alertmanager-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec 11 09:17:33 compute-0 podman[96271]: 2025-12-11 09:17:33.606981978 +0000 UTC m=+0.914774917 container start f6be69c7d77625e49bceb27822a33c1c522509b882e158ee75cfb836ad9761a2 (image=quay.io/prometheus/alertmanager:v0.25.0, name=ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-alertmanager-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec 11 09:17:33 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-alertmanager-compute-0[96287]: ts=2025-12-11T09:17:33.638Z caller=main.go:240 level=info msg="Starting Alertmanager" version="(version=0.25.0, branch=HEAD, revision=258fab7cdd551f2cf251ed0348f0ad7289aee789)"
Dec 11 09:17:33 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-alertmanager-compute-0[96287]: ts=2025-12-11T09:17:33.638Z caller=main.go:241 level=info build_context="(go=go1.19.4, user=root@abe866dd5717, date=20221222-14:51:36)"
Dec 11 09:17:33 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-alertmanager-compute-0[96287]: ts=2025-12-11T09:17:33.646Z caller=cluster.go:185 level=info component=cluster msg="setting advertise address explicitly" addr=192.168.122.100 port=9094
Dec 11 09:17:33 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-alertmanager-compute-0[96287]: ts=2025-12-11T09:17:33.649Z caller=cluster.go:681 level=info component=cluster msg="Waiting for gossip to settle..." interval=2s
Dec 11 09:17:33 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-alertmanager-compute-0[96287]: ts=2025-12-11T09:17:33.681Z caller=coordinator.go:113 level=info component=configuration msg="Loading configuration file" file=/etc/alertmanager/alertmanager.yml
Dec 11 09:17:33 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-alertmanager-compute-0[96287]: ts=2025-12-11T09:17:33.681Z caller=coordinator.go:126 level=info component=configuration msg="Completed loading of configuration file" file=/etc/alertmanager/alertmanager.yml
Dec 11 09:17:33 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-alertmanager-compute-0[96287]: ts=2025-12-11T09:17:33.686Z caller=tls_config.go:232 level=info msg="Listening on" address=192.168.122.100:9093
Dec 11 09:17:33 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-alertmanager-compute-0[96287]: ts=2025-12-11T09:17:33.686Z caller=tls_config.go:235 level=info msg="TLS is disabled." http2=false address=192.168.122.100:9093
Dec 11 09:17:33 compute-0 ceph-mon[74426]: log_channel(audit) log [INF] : from='mgr.14400 192.168.122.100:0/1063598679' entity='mgr.compute-0.wwpcae' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "9"}]': finished
Dec 11 09:17:33 compute-0 ceph-mon[74426]: mon.compute-0@0(leader).osd e83 e83: 3 total, 3 up, 3 in
Dec 11 09:17:34 compute-0 ceph-mon[74426]: log_channel(cluster) log [DBG] : osdmap e83: 3 total, 3 up, 3 in
Dec 11 09:17:34 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-2-0-compute-0-iryjby[94782]: 11/12/2025 09:17:34 : epoch 693a8bfe : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fef700022a0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 11 09:17:34 compute-0 ceph-mgr[74715]: log_channel(cluster) log [DBG] : pgmap v92: 353 pgs: 4 unknown, 4 active+remapped, 345 active+clean; 456 KiB data, 125 MiB used, 60 GiB / 60 GiB avail
Dec 11 09:17:34 compute-0 bash[96271]: f6be69c7d77625e49bceb27822a33c1c522509b882e158ee75cfb836ad9761a2
Dec 11 09:17:34 compute-0 systemd[1]: Started Ceph alertmanager.compute-0 for 31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060.
Dec 11 09:17:35 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-2-0-compute-0-iryjby[94782]: 11/12/2025 09:17:35 : epoch 693a8bfe : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fef54004050 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 11 09:17:35 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-2-0-compute-0-iryjby[94782]: 11/12/2025 09:17:35 : epoch 693a8bfe : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fef60003490 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 11 09:17:35 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-alertmanager-compute-0[96287]: ts=2025-12-11T09:17:35.649Z caller=cluster.go:706 level=info component=cluster msg="gossip not settled" polls=0 before=0 now=1 elapsed=2.000045087s
Dec 11 09:17:36 compute-0 ceph-mon[74426]: mon.compute-0@0(leader).osd e83 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 11 09:17:36 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 83 pg[10.18( empty local-lis/les=0/0 n=0 ec=61/50 lis/c=61/61 les/c/f=62/62/0 sis=83) [1] r=0 lpr=83 pi=[61,83)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 11 09:17:36 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 83 pg[10.6( v 56'1088 (0'0,56'1088] local-lis/les=81/82 n=6 ec=61/50 lis/c=81/70 les/c/f=82/71/0 sis=83 pruub=12.470537186s) [0] async=[0] r=-1 lpr=83 pi=[70,83)/1 crt=56'1088 mlcod 56'1088 active pruub 209.802474976s@ mbc={255={}}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 11 09:17:36 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 83 pg[10.6( v 56'1088 (0'0,56'1088] local-lis/les=81/82 n=6 ec=61/50 lis/c=81/70 les/c/f=82/71/0 sis=83 pruub=12.470464706s) [0] r=-1 lpr=83 pi=[70,83)/1 crt=56'1088 mlcod 0'0 unknown NOTIFY pruub 209.802474976s@ mbc={}] state<Start>: transitioning to Stray
Dec 11 09:17:36 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 83 pg[10.8( empty local-lis/les=0/0 n=0 ec=61/50 lis/c=61/61 les/c/f=62/62/0 sis=83) [1] r=0 lpr=83 pi=[61,83)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 11 09:17:36 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 83 pg[10.16( v 56'1088 (0'0,56'1088] local-lis/les=81/82 n=4 ec=61/50 lis/c=81/70 les/c/f=82/71/0 sis=83 pruub=12.471956253s) [0] async=[0] r=-1 lpr=83 pi=[70,83)/1 crt=56'1088 mlcod 56'1088 active pruub 209.804458618s@ mbc={255={}}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 11 09:17:36 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 83 pg[10.16( v 56'1088 (0'0,56'1088] local-lis/les=81/82 n=4 ec=61/50 lis/c=81/70 les/c/f=82/71/0 sis=83 pruub=12.471881866s) [0] r=-1 lpr=83 pi=[70,83)/1 crt=56'1088 mlcod 0'0 unknown NOTIFY pruub 209.804458618s@ mbc={}] state<Start>: transitioning to Stray
Dec 11 09:17:36 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 83 pg[10.1e( v 56'1088 (0'0,56'1088] local-lis/les=81/82 n=5 ec=61/50 lis/c=81/70 les/c/f=82/71/0 sis=83 pruub=12.471822739s) [0] async=[0] r=-1 lpr=83 pi=[70,83)/1 crt=56'1088 mlcod 56'1088 active pruub 209.804580688s@ mbc={255={}}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 11 09:17:36 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 83 pg[10.1e( v 56'1088 (0'0,56'1088] local-lis/les=81/82 n=5 ec=61/50 lis/c=81/70 les/c/f=82/71/0 sis=83 pruub=12.471715927s) [0] r=-1 lpr=83 pi=[70,83)/1 crt=56'1088 mlcod 0'0 unknown NOTIFY pruub 209.804580688s@ mbc={}] state<Start>: transitioning to Stray
Dec 11 09:17:36 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 83 pg[10.e( v 56'1088 (0'0,56'1088] local-lis/les=81/82 n=6 ec=61/50 lis/c=81/70 les/c/f=82/71/0 sis=83 pruub=12.445162773s) [0] async=[0] r=-1 lpr=83 pi=[70,83)/1 crt=56'1088 mlcod 56'1088 active pruub 209.778884888s@ mbc={255={}}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 11 09:17:36 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 83 pg[10.e( v 56'1088 (0'0,56'1088] local-lis/les=81/82 n=6 ec=61/50 lis/c=81/70 les/c/f=82/71/0 sis=83 pruub=12.444724083s) [0] r=-1 lpr=83 pi=[70,83)/1 crt=56'1088 mlcod 0'0 unknown NOTIFY pruub 209.778884888s@ mbc={}] state<Start>: transitioning to Stray
Dec 11 09:17:36 compute-0 ceph-mon[74426]: 8.1a scrub starts
Dec 11 09:17:36 compute-0 ceph-mon[74426]: 11.19 deep-scrub starts
Dec 11 09:17:36 compute-0 ceph-mon[74426]: 11.19 deep-scrub ok
Dec 11 09:17:36 compute-0 ceph-mon[74426]: 8.1a scrub ok
Dec 11 09:17:36 compute-0 ceph-mon[74426]: 12.1 scrub starts
Dec 11 09:17:36 compute-0 ceph-mon[74426]: 12.1 scrub ok
Dec 11 09:17:36 compute-0 ceph-mon[74426]: from='mgr.14400 192.168.122.100:0/1063598679' entity='mgr.compute-0.wwpcae' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "8"}]': finished
Dec 11 09:17:36 compute-0 ceph-mon[74426]: osdmap e82: 3 total, 3 up, 3 in
Dec 11 09:17:36 compute-0 ceph-mon[74426]: 11.16 scrub starts
Dec 11 09:17:36 compute-0 ceph-mon[74426]: 11.16 scrub ok
Dec 11 09:17:36 compute-0 ceph-mon[74426]: from='mgr.14400 192.168.122.100:0/1063598679' entity='mgr.compute-0.wwpcae' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "9"}]: dispatch
Dec 11 09:17:36 compute-0 ceph-mon[74426]: 9.12 scrub starts
Dec 11 09:17:36 compute-0 ceph-mon[74426]: 9.12 scrub ok
Dec 11 09:17:36 compute-0 sudo[95823]: pam_unix(sudo:session): session closed for user root
Dec 11 09:17:36 compute-0 ceph-mon[74426]: mon.compute-0@0(leader).osd e83 do_prune osdmap full prune enabled
Dec 11 09:17:36 compute-0 ceph-mon[74426]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec 11 09:17:36 compute-0 ceph-mon[74426]: mon.compute-0@0(leader).osd e84 e84: 3 total, 3 up, 3 in
Dec 11 09:17:36 compute-0 ceph-mon[74426]: log_channel(cluster) log [DBG] : osdmap e84: 3 total, 3 up, 3 in
Dec 11 09:17:36 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 84 pg[10.8( empty local-lis/les=0/0 n=0 ec=61/50 lis/c=61/61 les/c/f=62/62/0 sis=84) [1]/[0] r=-1 lpr=84 pi=[61,84)/1 crt=0'0 mlcod 0'0 remapped mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 11 09:17:36 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 84 pg[10.8( empty local-lis/les=0/0 n=0 ec=61/50 lis/c=61/61 les/c/f=62/62/0 sis=84) [1]/[0] r=-1 lpr=84 pi=[61,84)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec 11 09:17:36 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 84 pg[10.18( empty local-lis/les=0/0 n=0 ec=61/50 lis/c=61/61 les/c/f=62/62/0 sis=84) [1]/[0] r=-1 lpr=84 pi=[61,84)/1 crt=0'0 mlcod 0'0 remapped mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 11 09:17:36 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 84 pg[10.18( empty local-lis/les=0/0 n=0 ec=61/50 lis/c=61/61 les/c/f=62/62/0 sis=84) [1]/[0] r=-1 lpr=84 pi=[61,84)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec 11 09:17:36 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-2-0-compute-0-iryjby[94782]: 11/12/2025 09:17:36 : epoch 693a8bfe : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fef4c002b10 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 11 09:17:36 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-haproxy-nfs-cephfs-compute-0-qtoxfz[95274]: [WARNING] 344/091736 (4) : Server backend/nfs.cephfs.1 is UP, reason: Layer4 check passed, check duration: 1ms. 3 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Dec 11 09:17:36 compute-0 ceph-osd[82859]: log_channel(cluster) log [DBG] : 10.15 scrub starts
Dec 11 09:17:36 compute-0 ceph-mon[74426]: log_channel(audit) log [INF] : from='mgr.14400 192.168.122.100:0/1063598679' entity='mgr.compute-0.wwpcae' 
Dec 11 09:17:36 compute-0 ceph-mon[74426]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec 11 09:17:36 compute-0 ceph-osd[82859]: log_channel(cluster) log [DBG] : 10.15 scrub ok
Dec 11 09:17:36 compute-0 ceph-mgr[74715]: log_channel(cluster) log [DBG] : pgmap v94: 353 pgs: 4 unknown, 4 active+remapped, 345 active+clean; 456 KiB data, 125 MiB used, 60 GiB / 60 GiB avail
Dec 11 09:17:37 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-2-0-compute-0-iryjby[94782]: 11/12/2025 09:17:37 : epoch 693a8bfe : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fef700022a0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 11 09:17:37 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-2-0-compute-0-iryjby[94782]: 11/12/2025 09:17:37 : epoch 693a8bfe : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fef54004050 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 11 09:17:37 compute-0 ceph-mon[74426]: log_channel(audit) log [INF] : from='mgr.14400 192.168.122.100:0/1063598679' entity='mgr.compute-0.wwpcae' 
Dec 11 09:17:37 compute-0 ceph-mon[74426]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.alertmanager}] v 0)
Dec 11 09:17:37 compute-0 ceph-mon[74426]: mon.compute-0@0(leader).osd e84 do_prune osdmap full prune enabled
Dec 11 09:17:37 compute-0 ceph-osd[82859]: log_channel(cluster) log [DBG] : 9.19 scrub starts
Dec 11 09:17:37 compute-0 ceph-osd[82859]: log_channel(cluster) log [DBG] : 9.19 scrub ok
Dec 11 09:17:38 compute-0 ceph-mgr[74715]: [progress INFO root] Writing back 27 completed events
Dec 11 09:17:38 compute-0 ceph-mon[74426]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0)
Dec 11 09:17:38 compute-0 ceph-mon[74426]: pgmap v90: 353 pgs: 353 active+clean; 457 KiB data, 125 MiB used, 60 GiB / 60 GiB avail; 124 B/s, 7 objects/s recovering
Dec 11 09:17:38 compute-0 ceph-mon[74426]: from='mgr.14400 192.168.122.100:0/1063598679' entity='mgr.compute-0.wwpcae' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "9"}]': finished
Dec 11 09:17:38 compute-0 ceph-mon[74426]: 12.1d scrub starts
Dec 11 09:17:38 compute-0 ceph-mon[74426]: 12.1d scrub ok
Dec 11 09:17:38 compute-0 ceph-mon[74426]: 8.19 scrub starts
Dec 11 09:17:38 compute-0 ceph-mon[74426]: 8.19 scrub ok
Dec 11 09:17:38 compute-0 ceph-mon[74426]: osdmap e83: 3 total, 3 up, 3 in
Dec 11 09:17:38 compute-0 ceph-mon[74426]: 9.18 scrub starts
Dec 11 09:17:38 compute-0 ceph-mon[74426]: 9.18 scrub ok
Dec 11 09:17:38 compute-0 ceph-mon[74426]: pgmap v92: 353 pgs: 4 unknown, 4 active+remapped, 345 active+clean; 456 KiB data, 125 MiB used, 60 GiB / 60 GiB avail
Dec 11 09:17:38 compute-0 ceph-mon[74426]: 11.1a scrub starts
Dec 11 09:17:38 compute-0 ceph-mon[74426]: 11.1a scrub ok
Dec 11 09:17:38 compute-0 ceph-mon[74426]: 12.18 scrub starts
Dec 11 09:17:38 compute-0 ceph-mon[74426]: 12.18 scrub ok
Dec 11 09:17:38 compute-0 ceph-mon[74426]: 11.1e deep-scrub starts
Dec 11 09:17:38 compute-0 ceph-mon[74426]: 11.1e deep-scrub ok
Dec 11 09:17:38 compute-0 ceph-mon[74426]: osdmap e84: 3 total, 3 up, 3 in
Dec 11 09:17:38 compute-0 ceph-mon[74426]: 10.15 scrub starts
Dec 11 09:17:38 compute-0 ceph-mon[74426]: from='mgr.14400 192.168.122.100:0/1063598679' entity='mgr.compute-0.wwpcae' 
Dec 11 09:17:38 compute-0 ceph-mon[74426]: 10.15 scrub ok
Dec 11 09:17:38 compute-0 ceph-mon[74426]: mon.compute-0@0(leader).osd e85 e85: 3 total, 3 up, 3 in
Dec 11 09:17:38 compute-0 ceph-mon[74426]: log_channel(audit) log [INF] : from='mgr.14400 192.168.122.100:0/1063598679' entity='mgr.compute-0.wwpcae' 
Dec 11 09:17:38 compute-0 ceph-mgr[74715]: [progress INFO root] complete: finished ev 8f0fe8b0-6f41-46a4-ae80-5ffaf388eb49 (Updating alertmanager deployment (+1 -> 1))
Dec 11 09:17:38 compute-0 ceph-mgr[74715]: [progress INFO root] Completed event 8f0fe8b0-6f41-46a4-ae80-5ffaf388eb49 (Updating alertmanager deployment (+1 -> 1)) in 12 seconds
Dec 11 09:17:38 compute-0 ceph-mon[74426]: log_channel(cluster) log [DBG] : osdmap e85: 3 total, 3 up, 3 in
Dec 11 09:17:38 compute-0 ceph-mon[74426]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.alertmanager}] v 0)
Dec 11 09:17:38 compute-0 ceph-mon[74426]: log_channel(audit) log [INF] : from='mgr.14400 192.168.122.100:0/1063598679' entity='mgr.compute-0.wwpcae' 
Dec 11 09:17:38 compute-0 ceph-mon[74426]: log_channel(audit) log [INF] : from='mgr.14400 192.168.122.100:0/1063598679' entity='mgr.compute-0.wwpcae' 
Dec 11 09:17:38 compute-0 ceph-mgr[74715]: [progress INFO root] update: starting ev bf6fdf63-6c2c-4774-94dc-9ab99b149cb9 (Updating grafana deployment (+1 -> 1))
Dec 11 09:17:38 compute-0 ceph-mgr[74715]: [cephadm INFO cephadm.services.monitoring] Regenerating cephadm self-signed grafana TLS certificates
Dec 11 09:17:38 compute-0 ceph-mgr[74715]: log_channel(cephadm) log [INF] : Regenerating cephadm self-signed grafana TLS certificates
Dec 11 09:17:38 compute-0 ceph-mon[74426]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/cert_store.cert.grafana_cert}] v 0)
Dec 11 09:17:38 compute-0 ceph-mon[74426]: log_channel(audit) log [INF] : from='mgr.14400 192.168.122.100:0/1063598679' entity='mgr.compute-0.wwpcae' 
Dec 11 09:17:38 compute-0 ceph-mon[74426]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/cert_store.key.grafana_key}] v 0)
Dec 11 09:17:38 compute-0 ceph-mon[74426]: log_channel(audit) log [INF] : from='mgr.14400 192.168.122.100:0/1063598679' entity='mgr.compute-0.wwpcae' 
Dec 11 09:17:38 compute-0 ceph-mon[74426]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "dashboard set-grafana-api-ssl-verify", "value": "false"} v 0)
Dec 11 09:17:38 compute-0 ceph-mon[74426]: log_channel(audit) log [INF] : from='mgr.14400 192.168.122.100:0/1063598679' entity='mgr.compute-0.wwpcae' cmd=[{"prefix": "dashboard set-grafana-api-ssl-verify", "value": "false"}]: dispatch
Dec 11 09:17:38 compute-0 ceph-mgr[74715]: log_channel(audit) log [DBG] : from='mon.0 -' entity='mon.' cmd=[{"prefix": "dashboard set-grafana-api-ssl-verify", "value": "false"}]: dispatch
Dec 11 09:17:38 compute-0 ceph-mon[74426]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config set, name=mgr/dashboard/GRAFANA_API_SSL_VERIFY}] v 0)
Dec 11 09:17:38 compute-0 ceph-mon[74426]: log_channel(audit) log [INF] : from='mgr.14400 192.168.122.100:0/1063598679' entity='mgr.compute-0.wwpcae' 
Dec 11 09:17:38 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-2-0-compute-0-iryjby[94782]: 11/12/2025 09:17:38 : epoch 693a8bfe : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fef60003490 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 11 09:17:38 compute-0 ceph-mgr[74715]: [cephadm INFO cephadm.serve] Deploying daemon grafana.compute-0 on compute-0
Dec 11 09:17:38 compute-0 ceph-mgr[74715]: log_channel(cephadm) log [INF] : Deploying daemon grafana.compute-0 on compute-0
Dec 11 09:17:38 compute-0 sudo[96315]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 11 09:17:38 compute-0 sudo[96315]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 11 09:17:38 compute-0 sudo[96315]: pam_unix(sudo:session): session closed for user root
Dec 11 09:17:38 compute-0 sudo[96340]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/grafana:10.4.0 --timeout 895 _orch deploy --fsid 31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060
Dec 11 09:17:38 compute-0 sudo[96340]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 11 09:17:38 compute-0 ceph-osd[82859]: log_channel(cluster) log [DBG] : 9.1e scrub starts
Dec 11 09:17:38 compute-0 ceph-osd[82859]: log_channel(cluster) log [DBG] : 9.1e scrub ok
Dec 11 09:17:38 compute-0 ceph-mgr[74715]: log_channel(cluster) log [DBG] : pgmap v96: 353 pgs: 2 remapped+peering, 4 active+remapped, 347 active+clean; 456 KiB data, 125 MiB used, 60 GiB / 60 GiB avail; 91 B/s, 5 objects/s recovering
Dec 11 09:17:39 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-2-0-compute-0-iryjby[94782]: 11/12/2025 09:17:39 : epoch 693a8bfe : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fef4c002b10 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 11 09:17:39 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-2-0-compute-0-iryjby[94782]: 11/12/2025 09:17:39 : epoch 693a8bfe : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fef70003730 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 11 09:17:39 compute-0 ceph-mon[74426]: mon.compute-0@0(leader).osd e85 do_prune osdmap full prune enabled
Dec 11 09:17:39 compute-0 ceph-mon[74426]: 8.1c scrub starts
Dec 11 09:17:39 compute-0 ceph-mon[74426]: 8.1c scrub ok
Dec 11 09:17:39 compute-0 ceph-mon[74426]: pgmap v94: 353 pgs: 4 unknown, 4 active+remapped, 345 active+clean; 456 KiB data, 125 MiB used, 60 GiB / 60 GiB avail
Dec 11 09:17:39 compute-0 ceph-mon[74426]: 11.1c scrub starts
Dec 11 09:17:39 compute-0 ceph-mon[74426]: 11.1c scrub ok
Dec 11 09:17:39 compute-0 ceph-mon[74426]: from='mgr.14400 192.168.122.100:0/1063598679' entity='mgr.compute-0.wwpcae' 
Dec 11 09:17:39 compute-0 ceph-mon[74426]: 9.19 scrub starts
Dec 11 09:17:39 compute-0 ceph-mon[74426]: 9.19 scrub ok
Dec 11 09:17:39 compute-0 ceph-mon[74426]: 9.13 scrub starts
Dec 11 09:17:39 compute-0 ceph-mon[74426]: 9.13 scrub ok
Dec 11 09:17:39 compute-0 ceph-mon[74426]: 11.1d scrub starts
Dec 11 09:17:39 compute-0 ceph-mon[74426]: 11.1d scrub ok
Dec 11 09:17:39 compute-0 ceph-mon[74426]: from='mgr.14400 192.168.122.100:0/1063598679' entity='mgr.compute-0.wwpcae' 
Dec 11 09:17:39 compute-0 ceph-mon[74426]: osdmap e85: 3 total, 3 up, 3 in
Dec 11 09:17:39 compute-0 ceph-mon[74426]: from='mgr.14400 192.168.122.100:0/1063598679' entity='mgr.compute-0.wwpcae' 
Dec 11 09:17:39 compute-0 ceph-mon[74426]: from='mgr.14400 192.168.122.100:0/1063598679' entity='mgr.compute-0.wwpcae' 
Dec 11 09:17:39 compute-0 ceph-mon[74426]: Regenerating cephadm self-signed grafana TLS certificates
Dec 11 09:17:39 compute-0 ceph-mon[74426]: from='mgr.14400 192.168.122.100:0/1063598679' entity='mgr.compute-0.wwpcae' 
Dec 11 09:17:39 compute-0 ceph-mon[74426]: from='mgr.14400 192.168.122.100:0/1063598679' entity='mgr.compute-0.wwpcae' 
Dec 11 09:17:39 compute-0 ceph-mon[74426]: from='mgr.14400 192.168.122.100:0/1063598679' entity='mgr.compute-0.wwpcae' cmd=[{"prefix": "dashboard set-grafana-api-ssl-verify", "value": "false"}]: dispatch
Dec 11 09:17:39 compute-0 ceph-mon[74426]: from='mon.0 -' entity='mon.' cmd=[{"prefix": "dashboard set-grafana-api-ssl-verify", "value": "false"}]: dispatch
Dec 11 09:17:39 compute-0 ceph-mon[74426]: from='mgr.14400 192.168.122.100:0/1063598679' entity='mgr.compute-0.wwpcae' 
Dec 11 09:17:39 compute-0 ceph-mon[74426]: Deploying daemon grafana.compute-0 on compute-0
Dec 11 09:17:39 compute-0 ceph-mon[74426]: 9.1e scrub starts
Dec 11 09:17:39 compute-0 ceph-mon[74426]: 9.1e scrub ok
Dec 11 09:17:39 compute-0 ceph-mon[74426]: 9.1d scrub starts
Dec 11 09:17:39 compute-0 ceph-mon[74426]: 9.1d scrub ok
Dec 11 09:17:39 compute-0 ceph-osd[82859]: log_channel(cluster) log [DBG] : 9.1f scrub starts
Dec 11 09:17:39 compute-0 ceph-osd[82859]: log_channel(cluster) log [DBG] : 9.1f scrub ok
Dec 11 09:17:40 compute-0 ceph-mon[74426]: mon.compute-0@0(leader).osd e86 e86: 3 total, 3 up, 3 in
Dec 11 09:17:40 compute-0 ceph-mon[74426]: log_channel(cluster) log [DBG] : osdmap e86: 3 total, 3 up, 3 in
Dec 11 09:17:40 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 86 pg[10.8( v 56'1088 (0'0,56'1088] local-lis/les=0/0 n=7 ec=61/50 lis/c=84/61 les/c/f=85/62/0 sis=86) [1] r=0 lpr=86 pi=[61,86)/1 luod=0'0 crt=56'1088 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 1 -> 1, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Dec 11 09:17:40 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 86 pg[10.8( v 56'1088 (0'0,56'1088] local-lis/les=0/0 n=7 ec=61/50 lis/c=84/61 les/c/f=85/62/0 sis=86) [1] r=0 lpr=86 pi=[61,86)/1 crt=56'1088 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 11 09:17:40 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 86 pg[10.18( v 56'1088 (0'0,56'1088] local-lis/les=0/0 n=5 ec=61/50 lis/c=84/61 les/c/f=85/62/0 sis=86) [1] r=0 lpr=86 pi=[61,86)/1 luod=0'0 crt=56'1088 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 1 -> 1, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Dec 11 09:17:40 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 86 pg[10.18( v 56'1088 (0'0,56'1088] local-lis/les=0/0 n=5 ec=61/50 lis/c=84/61 les/c/f=85/62/0 sis=86) [1] r=0 lpr=86 pi=[61,86)/1 crt=56'1088 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 11 09:17:40 compute-0 ceph-mon[74426]: pgmap v96: 353 pgs: 2 remapped+peering, 4 active+remapped, 347 active+clean; 456 KiB data, 125 MiB used, 60 GiB / 60 GiB avail; 91 B/s, 5 objects/s recovering
Dec 11 09:17:40 compute-0 ceph-mon[74426]: 12.1a scrub starts
Dec 11 09:17:40 compute-0 ceph-mon[74426]: 12.1a scrub ok
Dec 11 09:17:40 compute-0 ceph-mon[74426]: 9.1f scrub starts
Dec 11 09:17:40 compute-0 ceph-mon[74426]: 9.1f scrub ok
Dec 11 09:17:40 compute-0 ceph-mon[74426]: osdmap e86: 3 total, 3 up, 3 in
Dec 11 09:17:40 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-2-0-compute-0-iryjby[94782]: 11/12/2025 09:17:40 : epoch 693a8bfe : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fef54004050 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 11 09:17:40 compute-0 ceph-mgr[74715]: log_channel(cluster) log [DBG] : pgmap v98: 353 pgs: 2 remapped+peering, 4 active+remapped, 347 active+clean; 456 KiB data, 125 MiB used, 60 GiB / 60 GiB avail; 6.5 KiB/s rd, 0 B/s wr, 12 op/s; 146 B/s, 5 objects/s recovering
Dec 11 09:17:40 compute-0 ceph-osd[82859]: log_channel(cluster) log [DBG] : 8.1e scrub starts
Dec 11 09:17:40 compute-0 ceph-osd[82859]: log_channel(cluster) log [DBG] : 8.1e scrub ok
Dec 11 09:17:41 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-2-0-compute-0-iryjby[94782]: 11/12/2025 09:17:41 : epoch 693a8bfe : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fef60003490 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 11 09:17:41 compute-0 ceph-mon[74426]: mon.compute-0@0(leader).osd e86 do_prune osdmap full prune enabled
Dec 11 09:17:41 compute-0 ceph-mon[74426]: mon.compute-0@0(leader).osd e87 e87: 3 total, 3 up, 3 in
Dec 11 09:17:41 compute-0 ceph-mon[74426]: log_channel(cluster) log [DBG] : osdmap e87: 3 total, 3 up, 3 in
Dec 11 09:17:41 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 87 pg[10.18( v 56'1088 (0'0,56'1088] local-lis/les=86/87 n=5 ec=61/50 lis/c=84/61 les/c/f=85/62/0 sis=86) [1] r=0 lpr=86 pi=[61,86)/1 crt=56'1088 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 11 09:17:41 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 87 pg[10.8( v 56'1088 (0'0,56'1088] local-lis/les=86/87 n=7 ec=61/50 lis/c=84/61 les/c/f=85/62/0 sis=86) [1] r=0 lpr=86 pi=[61,86)/1 crt=56'1088 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 11 09:17:41 compute-0 ceph-mon[74426]: mon.compute-0@0(leader).osd e87 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 11 09:17:41 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-2-0-compute-0-iryjby[94782]: 11/12/2025 09:17:41 : epoch 693a8bfe : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fef4c002b10 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 11 09:17:41 compute-0 sudo[96446]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ixllnpbxescvzxelogvayiqkjnftdlwh ; /usr/bin/python3'
Dec 11 09:17:41 compute-0 sudo[96446]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 11 09:17:41 compute-0 ceph-mon[74426]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #18. Immutable memtables: 0.
Dec 11 09:17:41 compute-0 ceph-mon[74426]: rocksdb: (Original Log Time 2025/12/11-09:17:41.665615) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Dec 11 09:17:41 compute-0 ceph-mon[74426]: rocksdb: [db/flush_job.cc:856] [default] [JOB 3] Flushing memtable with next log file: 18
Dec 11 09:17:41 compute-0 ceph-mon[74426]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765444661665814, "job": 3, "event": "flush_started", "num_memtables": 1, "num_entries": 7678, "num_deletes": 251, "total_data_size": 14590026, "memory_usage": 15475216, "flush_reason": "Manual Compaction"}
Dec 11 09:17:41 compute-0 ceph-mon[74426]: rocksdb: [db/flush_job.cc:885] [default] [JOB 3] Level-0 flush table #19: started
Dec 11 09:17:41 compute-0 ceph-mon[74426]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765444661774951, "cf_name": "default", "job": 3, "event": "table_file_creation", "file_number": 19, "file_size": 12597157, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 146, "largest_seqno": 7815, "table_properties": {"data_size": 12569183, "index_size": 17811, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 9093, "raw_key_size": 87419, "raw_average_key_size": 24, "raw_value_size": 12500087, "raw_average_value_size": 3459, "num_data_blocks": 789, "num_entries": 3613, "num_filter_entries": 3613, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765444348, "oldest_key_time": 1765444348, "file_creation_time": 1765444661, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "e1ac8294-d02a-459a-8058-7f05c4f78e7d", "db_session_id": "U2WUY0PZPH8WS8N5I572", "orig_file_number": 19, "seqno_to_time_mapping": "N/A"}}
Dec 11 09:17:41 compute-0 ceph-mon[74426]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 3] Flush lasted 109375 microseconds, and 35960 cpu microseconds.
Dec 11 09:17:41 compute-0 ceph-mon[74426]: rocksdb: (Original Log Time 2025/12/11-09:17:41.775032) [db/flush_job.cc:967] [default] [JOB 3] Level-0 flush table #19: 12597157 bytes OK
Dec 11 09:17:41 compute-0 ceph-mon[74426]: rocksdb: (Original Log Time 2025/12/11-09:17:41.775087) [db/memtable_list.cc:519] [default] Level-0 commit table #19 started
Dec 11 09:17:41 compute-0 ceph-mon[74426]: rocksdb: (Original Log Time 2025/12/11-09:17:41.777047) [db/memtable_list.cc:722] [default] Level-0 commit table #19: memtable #1 done
Dec 11 09:17:41 compute-0 ceph-mon[74426]: rocksdb: (Original Log Time 2025/12/11-09:17:41.777076) EVENT_LOG_v1 {"time_micros": 1765444661777071, "job": 3, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [3, 0, 0, 0, 0, 0, 0], "immutable_memtables": 0}
Dec 11 09:17:41 compute-0 ceph-mon[74426]: rocksdb: (Original Log Time 2025/12/11-09:17:41.777101) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: files[3 0 0 0 0 0 0] max score 0.75
Dec 11 09:17:41 compute-0 ceph-mon[74426]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 3] Try to delete WAL files size 14555469, prev total WAL file size 14555469, number of live WAL files 2.
Dec 11 09:17:41 compute-0 ceph-mon[74426]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000014.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 11 09:17:41 compute-0 ceph-mon[74426]: rocksdb: (Original Log Time 2025/12/11-09:17:41.780231) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730030' seq:72057594037927935, type:22 .. '7061786F7300323532' seq:0, type:0; will stop at (end)
Dec 11 09:17:41 compute-0 ceph-mon[74426]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 4] Compacting 3@0 files to L6, score -1.00
Dec 11 09:17:41 compute-0 ceph-mon[74426]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 3 Base level 0, inputs: [19(12MB) 13(57KB) 8(1944B)]
Dec 11 09:17:41 compute-0 ceph-mon[74426]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765444661780430, "job": 4, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [19, 13, 8], "score": -1, "input_data_size": 12657593, "oldest_snapshot_seqno": -1}
Dec 11 09:17:41 compute-0 python3[96448]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint radosgw-admin quay.io/ceph/ceph:v19 --fsid 31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch ps --daemon-type rgw --format json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 11 09:17:41 compute-0 ceph-mon[74426]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 4] Generated table #20: 3430 keys, 12611317 bytes, temperature: kUnknown
Dec 11 09:17:41 compute-0 ceph-mon[74426]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765444661884642, "cf_name": "default", "job": 4, "event": "table_file_creation", "file_number": 20, "file_size": 12611317, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 12583788, "index_size": 17883, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 8581, "raw_key_size": 85564, "raw_average_key_size": 24, "raw_value_size": 12516262, "raw_average_value_size": 3649, "num_data_blocks": 794, "num_entries": 3430, "num_filter_entries": 3430, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765444346, "oldest_key_time": 0, "file_creation_time": 1765444661, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "e1ac8294-d02a-459a-8058-7f05c4f78e7d", "db_session_id": "U2WUY0PZPH8WS8N5I572", "orig_file_number": 20, "seqno_to_time_mapping": "N/A"}}
Dec 11 09:17:41 compute-0 ceph-mon[74426]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 11 09:17:41 compute-0 ceph-mon[74426]: rocksdb: (Original Log Time 2025/12/11-09:17:41.884929) [db/compaction/compaction_job.cc:1663] [default] [JOB 4] Compacted 3@0 files to L6 => 12611317 bytes
Dec 11 09:17:41 compute-0 ceph-mon[74426]: rocksdb: (Original Log Time 2025/12/11-09:17:41.886293) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 121.4 rd, 120.9 wr, level 6, files in(3, 0) out(1 +0 blob) MB in(12.1, 0.0 +0.0 blob) out(12.0 +0.0 blob), read-write-amplify(2.0) write-amplify(1.0) OK, records in: 3722, records dropped: 292 output_compression: NoCompression
Dec 11 09:17:41 compute-0 ceph-mon[74426]: rocksdb: (Original Log Time 2025/12/11-09:17:41.886323) EVENT_LOG_v1 {"time_micros": 1765444661886313, "job": 4, "event": "compaction_finished", "compaction_time_micros": 104298, "compaction_time_cpu_micros": 29071, "output_level": 6, "num_output_files": 1, "total_output_size": 12611317, "num_input_records": 3722, "num_output_records": 3430, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Dec 11 09:17:41 compute-0 ceph-mon[74426]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000019.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 11 09:17:41 compute-0 ceph-mon[74426]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765444661888275, "job": 4, "event": "table_file_deletion", "file_number": 19}
Dec 11 09:17:41 compute-0 ceph-mon[74426]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000013.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 11 09:17:41 compute-0 ceph-mon[74426]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765444661888385, "job": 4, "event": "table_file_deletion", "file_number": 13}
Dec 11 09:17:41 compute-0 ceph-mon[74426]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000008.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 11 09:17:41 compute-0 ceph-mon[74426]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765444661888466, "job": 4, "event": "table_file_deletion", "file_number": 8}
Dec 11 09:17:41 compute-0 ceph-mon[74426]: rocksdb: (Original Log Time 2025/12/11-09:17:41.779912) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 11 09:17:41 compute-0 podman[96450]: 2025-12-11 09:17:41.88788131 +0000 UTC m=+0.086002142 container create dfa645e373ee9a67d22f8d8b8e08691bb94c7b2401babf80845f2c7e221bec51 (image=quay.io/ceph/ceph:v19, name=quizzical_dhawan, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, ceph=True, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 11 09:17:41 compute-0 podman[96450]: 2025-12-11 09:17:41.82636388 +0000 UTC m=+0.024484712 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Dec 11 09:17:41 compute-0 ceph-osd[82859]: log_channel(cluster) log [DBG] : 9.1c scrub starts
Dec 11 09:17:41 compute-0 systemd[1]: Started libpod-conmon-dfa645e373ee9a67d22f8d8b8e08691bb94c7b2401babf80845f2c7e221bec51.scope.
Dec 11 09:17:41 compute-0 ceph-osd[82859]: log_channel(cluster) log [DBG] : 9.1c scrub ok
Dec 11 09:17:41 compute-0 systemd[1]: Started libcrun container.
Dec 11 09:17:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4fbb8cfa41d047669123ac07169fb99eea15d2a32c64d37fb5bedb402d4820fc/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec 11 09:17:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4fbb8cfa41d047669123ac07169fb99eea15d2a32c64d37fb5bedb402d4820fc/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 11 09:17:41 compute-0 podman[96450]: 2025-12-11 09:17:41.973690703 +0000 UTC m=+0.171811545 container init dfa645e373ee9a67d22f8d8b8e08691bb94c7b2401babf80845f2c7e221bec51 (image=quay.io/ceph/ceph:v19, name=quizzical_dhawan, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=squid, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Dec 11 09:17:41 compute-0 podman[96450]: 2025-12-11 09:17:41.982441015 +0000 UTC m=+0.180561847 container start dfa645e373ee9a67d22f8d8b8e08691bb94c7b2401babf80845f2c7e221bec51 (image=quay.io/ceph/ceph:v19, name=quizzical_dhawan, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325)
Dec 11 09:17:41 compute-0 podman[96450]: 2025-12-11 09:17:41.986052106 +0000 UTC m=+0.184172968 container attach dfa645e373ee9a67d22f8d8b8e08691bb94c7b2401babf80845f2c7e221bec51 (image=quay.io/ceph/ceph:v19, name=quizzical_dhawan, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 11 09:17:42 compute-0 quizzical_dhawan[96464]: ERROR: invalid flag --daemon-type
Dec 11 09:17:42 compute-0 systemd[1]: libpod-dfa645e373ee9a67d22f8d8b8e08691bb94c7b2401babf80845f2c7e221bec51.scope: Deactivated successfully.
Dec 11 09:17:42 compute-0 podman[96450]: 2025-12-11 09:17:42.04188096 +0000 UTC m=+0.240001792 container died dfa645e373ee9a67d22f8d8b8e08691bb94c7b2401babf80845f2c7e221bec51 (image=quay.io/ceph/ceph:v19, name=quizzical_dhawan, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=squid, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325)
Dec 11 09:17:42 compute-0 systemd[1]: var-lib-containers-storage-overlay-4fbb8cfa41d047669123ac07169fb99eea15d2a32c64d37fb5bedb402d4820fc-merged.mount: Deactivated successfully.
Dec 11 09:17:42 compute-0 podman[96450]: 2025-12-11 09:17:42.086769924 +0000 UTC m=+0.284890756 container remove dfa645e373ee9a67d22f8d8b8e08691bb94c7b2401babf80845f2c7e221bec51 (image=quay.io/ceph/ceph:v19, name=quizzical_dhawan, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 11 09:17:42 compute-0 ceph-mon[74426]: 11.13 scrub starts
Dec 11 09:17:42 compute-0 ceph-mon[74426]: 11.13 scrub ok
Dec 11 09:17:42 compute-0 ceph-mon[74426]: pgmap v98: 353 pgs: 2 remapped+peering, 4 active+remapped, 347 active+clean; 456 KiB data, 125 MiB used, 60 GiB / 60 GiB avail; 6.5 KiB/s rd, 0 B/s wr, 12 op/s; 146 B/s, 5 objects/s recovering
Dec 11 09:17:42 compute-0 ceph-mon[74426]: 8.1e scrub starts
Dec 11 09:17:42 compute-0 ceph-mon[74426]: 8.1e scrub ok
Dec 11 09:17:42 compute-0 ceph-mon[74426]: 10.17 scrub starts
Dec 11 09:17:42 compute-0 ceph-mon[74426]: 10.17 scrub ok
Dec 11 09:17:42 compute-0 ceph-mon[74426]: osdmap e87: 3 total, 3 up, 3 in
Dec 11 09:17:42 compute-0 systemd[1]: libpod-conmon-dfa645e373ee9a67d22f8d8b8e08691bb94c7b2401babf80845f2c7e221bec51.scope: Deactivated successfully.
Dec 11 09:17:42 compute-0 sudo[96446]: pam_unix(sudo:session): session closed for user root
Dec 11 09:17:42 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-2-0-compute-0-iryjby[94782]: 11/12/2025 09:17:42 : epoch 693a8bfe : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fef70003730 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 11 09:17:42 compute-0 ceph-mgr[74715]: log_channel(cluster) log [DBG] : pgmap v100: 353 pgs: 2 remapped+peering, 4 active+remapped, 347 active+clean; 456 KiB data, 125 MiB used, 60 GiB / 60 GiB avail; 6.5 KiB/s rd, 0 B/s wr, 12 op/s; 146 B/s, 5 objects/s recovering
Dec 11 09:17:42 compute-0 ceph-osd[82859]: log_channel(cluster) log [DBG] : 8.1d scrub starts
Dec 11 09:17:42 compute-0 ceph-osd[82859]: log_channel(cluster) log [DBG] : 8.1d scrub ok
Dec 11 09:17:43 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-2-0-compute-0-iryjby[94782]: 11/12/2025 09:17:43 : epoch 693a8bfe : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fef54004050 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 11 09:17:43 compute-0 ceph-mon[74426]: 9.5 scrub starts
Dec 11 09:17:43 compute-0 ceph-mon[74426]: 9.5 scrub ok
Dec 11 09:17:43 compute-0 ceph-mon[74426]: 9.1c scrub starts
Dec 11 09:17:43 compute-0 ceph-mon[74426]: 9.1c scrub ok
Dec 11 09:17:43 compute-0 ceph-mon[74426]: 10.f scrub starts
Dec 11 09:17:43 compute-0 ceph-mon[74426]: 10.f scrub ok
Dec 11 09:17:43 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-2-0-compute-0-iryjby[94782]: 11/12/2025 09:17:43 : epoch 693a8bfe : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fef60003490 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 11 09:17:43 compute-0 ceph-mgr[74715]: [progress WARNING root] Starting Global Recovery Event,6 pgs not in active + clean state
Dec 11 09:17:43 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-alertmanager-compute-0[96287]: ts=2025-12-11T09:17:43.652Z caller=cluster.go:698 level=info component=cluster msg="gossip settled; proceeding" elapsed=10.003185835s
Dec 11 09:17:43 compute-0 ceph-osd[82859]: log_channel(cluster) log [DBG] : 11.1f scrub starts
Dec 11 09:17:43 compute-0 ceph-osd[82859]: log_channel(cluster) log [DBG] : 11.1f scrub ok
Dec 11 09:17:44 compute-0 ceph-mon[74426]: 11.a scrub starts
Dec 11 09:17:44 compute-0 ceph-mon[74426]: 11.a scrub ok
Dec 11 09:17:44 compute-0 ceph-mon[74426]: pgmap v100: 353 pgs: 2 remapped+peering, 4 active+remapped, 347 active+clean; 456 KiB data, 125 MiB used, 60 GiB / 60 GiB avail; 6.5 KiB/s rd, 0 B/s wr, 12 op/s; 146 B/s, 5 objects/s recovering
Dec 11 09:17:44 compute-0 ceph-mon[74426]: 8.1d scrub starts
Dec 11 09:17:44 compute-0 ceph-mon[74426]: 8.1d scrub ok
Dec 11 09:17:44 compute-0 ceph-mon[74426]: 11.1f scrub starts
Dec 11 09:17:44 compute-0 ceph-mon[74426]: 11.1f scrub ok
Dec 11 09:17:44 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-2-0-compute-0-iryjby[94782]: 11/12/2025 09:17:44 : epoch 693a8bfe : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fef4c003c10 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 11 09:17:44 compute-0 ceph-mgr[74715]: log_channel(cluster) log [DBG] : pgmap v101: 353 pgs: 353 active+clean; 456 KiB data, 125 MiB used, 60 GiB / 60 GiB avail
Dec 11 09:17:44 compute-0 ceph-mon[74426]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "10"} v 0)
Dec 11 09:17:44 compute-0 ceph-mon[74426]: log_channel(audit) log [INF] : from='mgr.14400 192.168.122.100:0/1063598679' entity='mgr.compute-0.wwpcae' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "10"}]: dispatch
Dec 11 09:17:44 compute-0 ceph-osd[82859]: log_channel(cluster) log [DBG] : 11.10 scrub starts
Dec 11 09:17:44 compute-0 ceph-osd[82859]: log_channel(cluster) log [DBG] : 11.10 scrub ok
Dec 11 09:17:45 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-2-0-compute-0-iryjby[94782]: 11/12/2025 09:17:45 : epoch 693a8bfe : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fef70003730 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 11 09:17:45 compute-0 ceph-mon[74426]: mon.compute-0@0(leader).osd e87 do_prune osdmap full prune enabled
Dec 11 09:17:45 compute-0 ceph-mon[74426]: 8.5 scrub starts
Dec 11 09:17:45 compute-0 ceph-mon[74426]: 8.5 scrub ok
Dec 11 09:17:45 compute-0 ceph-mon[74426]: 11.1b scrub starts
Dec 11 09:17:45 compute-0 ceph-mon[74426]: 11.1b scrub ok
Dec 11 09:17:45 compute-0 ceph-mon[74426]: from='mgr.14400 192.168.122.100:0/1063598679' entity='mgr.compute-0.wwpcae' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "10"}]: dispatch
Dec 11 09:17:45 compute-0 ceph-mon[74426]: 11.10 scrub starts
Dec 11 09:17:45 compute-0 ceph-mon[74426]: 11.10 scrub ok
Dec 11 09:17:45 compute-0 ceph-mon[74426]: log_channel(audit) log [INF] : from='mgr.14400 192.168.122.100:0/1063598679' entity='mgr.compute-0.wwpcae' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "10"}]': finished
Dec 11 09:17:45 compute-0 ceph-mon[74426]: mon.compute-0@0(leader).osd e88 e88: 3 total, 3 up, 3 in
Dec 11 09:17:45 compute-0 ceph-mon[74426]: log_channel(cluster) log [DBG] : osdmap e88: 3 total, 3 up, 3 in
Dec 11 09:17:45 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-2-0-compute-0-iryjby[94782]: 11/12/2025 09:17:45 : epoch 693a8bfe : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fef54004050 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 11 09:17:45 compute-0 ceph-osd[82859]: log_channel(cluster) log [DBG] : 8.13 scrub starts
Dec 11 09:17:45 compute-0 ceph-osd[82859]: log_channel(cluster) log [DBG] : 8.13 scrub ok
Dec 11 09:17:46 compute-0 ceph-mon[74426]: mon.compute-0@0(leader).osd e88 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 11 09:17:46 compute-0 ceph-mon[74426]: mon.compute-0@0(leader).osd e88 do_prune osdmap full prune enabled
Dec 11 09:17:46 compute-0 ceph-mon[74426]: mon.compute-0@0(leader).osd e89 e89: 3 total, 3 up, 3 in
Dec 11 09:17:46 compute-0 ceph-mon[74426]: log_channel(cluster) log [DBG] : osdmap e89: 3 total, 3 up, 3 in
Dec 11 09:17:46 compute-0 ceph-mon[74426]: pgmap v101: 353 pgs: 353 active+clean; 456 KiB data, 125 MiB used, 60 GiB / 60 GiB avail
Dec 11 09:17:46 compute-0 ceph-mon[74426]: 8.9 scrub starts
Dec 11 09:17:46 compute-0 ceph-mon[74426]: 8.9 scrub ok
Dec 11 09:17:46 compute-0 ceph-mon[74426]: 8.12 scrub starts
Dec 11 09:17:46 compute-0 ceph-mon[74426]: 8.12 scrub ok
Dec 11 09:17:46 compute-0 ceph-mon[74426]: from='mgr.14400 192.168.122.100:0/1063598679' entity='mgr.compute-0.wwpcae' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "10"}]': finished
Dec 11 09:17:46 compute-0 ceph-mon[74426]: osdmap e88: 3 total, 3 up, 3 in
Dec 11 09:17:46 compute-0 ceph-mon[74426]: 8.13 scrub starts
Dec 11 09:17:46 compute-0 ceph-mon[74426]: 8.13 scrub ok
Dec 11 09:17:46 compute-0 ceph-mon[74426]: osdmap e89: 3 total, 3 up, 3 in
Dec 11 09:17:46 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-2-0-compute-0-iryjby[94782]: 11/12/2025 09:17:46 : epoch 693a8bfe : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fef60003490 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 11 09:17:46 compute-0 ceph-mgr[74715]: log_channel(cluster) log [DBG] : pgmap v104: 353 pgs: 353 active+clean; 456 KiB data, 125 MiB used, 60 GiB / 60 GiB avail
Dec 11 09:17:46 compute-0 ceph-mon[74426]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "11"} v 0)
Dec 11 09:17:46 compute-0 ceph-mon[74426]: log_channel(audit) log [INF] : from='mgr.14400 192.168.122.100:0/1063598679' entity='mgr.compute-0.wwpcae' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "11"}]: dispatch
Dec 11 09:17:46 compute-0 ceph-osd[82859]: log_channel(cluster) log [DBG] : 11.11 scrub starts
Dec 11 09:17:46 compute-0 ceph-osd[82859]: log_channel(cluster) log [DBG] : 11.11 scrub ok
Dec 11 09:17:47 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-2-0-compute-0-iryjby[94782]: 11/12/2025 09:17:47 : epoch 693a8bfe : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fef4c003c10 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 11 09:17:47 compute-0 ceph-mon[74426]: mon.compute-0@0(leader).osd e89 do_prune osdmap full prune enabled
Dec 11 09:17:47 compute-0 ceph-mon[74426]: log_channel(audit) log [INF] : from='mgr.14400 192.168.122.100:0/1063598679' entity='mgr.compute-0.wwpcae' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "11"}]': finished
Dec 11 09:17:47 compute-0 ceph-mon[74426]: mon.compute-0@0(leader).osd e90 e90: 3 total, 3 up, 3 in
Dec 11 09:17:47 compute-0 ceph-mon[74426]: log_channel(cluster) log [DBG] : osdmap e90: 3 total, 3 up, 3 in
Dec 11 09:17:47 compute-0 ceph-mon[74426]: 9.b scrub starts
Dec 11 09:17:47 compute-0 ceph-mon[74426]: 9.b scrub ok
Dec 11 09:17:47 compute-0 ceph-mon[74426]: 8.4 scrub starts
Dec 11 09:17:47 compute-0 ceph-mon[74426]: 8.4 scrub ok
Dec 11 09:17:47 compute-0 ceph-mon[74426]: from='mgr.14400 192.168.122.100:0/1063598679' entity='mgr.compute-0.wwpcae' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "11"}]: dispatch
Dec 11 09:17:47 compute-0 ceph-mon[74426]: 11.11 scrub starts
Dec 11 09:17:47 compute-0 ceph-mon[74426]: 11.11 scrub ok
Dec 11 09:17:47 compute-0 ceph-mon[74426]: from='mgr.14400 192.168.122.100:0/1063598679' entity='mgr.compute-0.wwpcae' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "11"}]': finished
Dec 11 09:17:47 compute-0 ceph-mon[74426]: osdmap e90: 3 total, 3 up, 3 in
Dec 11 09:17:47 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-2-0-compute-0-iryjby[94782]: 11/12/2025 09:17:47 : epoch 693a8bfe : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fef70004440 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 11 09:17:47 compute-0 ceph-osd[82859]: log_channel(cluster) log [DBG] : 12.12 scrub starts
Dec 11 09:17:48 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 90 pg[10.a( v 56'1088 (0'0,56'1088] local-lis/les=70/71 n=6 ec=61/50 lis/c=70/70 les/c/f=71/71/0 sis=90 pruub=9.118812561s) [0] r=-1 lpr=90 pi=[70,90)/1 crt=56'1088 mlcod 0'0 active pruub 218.339050293s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 11 09:17:48 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 90 pg[10.a( v 56'1088 (0'0,56'1088] local-lis/les=70/71 n=6 ec=61/50 lis/c=70/70 les/c/f=71/71/0 sis=90 pruub=9.118639946s) [0] r=-1 lpr=90 pi=[70,90)/1 crt=56'1088 mlcod 0'0 unknown NOTIFY pruub 218.339050293s@ mbc={}] state<Start>: transitioning to Stray
Dec 11 09:17:48 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 90 pg[10.1a( v 56'1088 (0'0,56'1088] local-lis/les=70/71 n=4 ec=61/50 lis/c=70/70 les/c/f=71/71/0 sis=90 pruub=9.118593216s) [0] r=-1 lpr=90 pi=[70,90)/1 crt=56'1088 mlcod 0'0 active pruub 218.339157104s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 11 09:17:48 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 90 pg[10.1a( v 56'1088 (0'0,56'1088] local-lis/les=70/71 n=4 ec=61/50 lis/c=70/70 les/c/f=71/71/0 sis=90 pruub=9.118563652s) [0] r=-1 lpr=90 pi=[70,90)/1 crt=56'1088 mlcod 0'0 unknown NOTIFY pruub 218.339157104s@ mbc={}] state<Start>: transitioning to Stray
Dec 11 09:17:48 compute-0 ceph-osd[82859]: log_channel(cluster) log [DBG] : 12.12 scrub ok
Dec 11 09:17:48 compute-0 ceph-mon[74426]: mon.compute-0@0(leader).osd e90 do_prune osdmap full prune enabled
Dec 11 09:17:48 compute-0 ceph-mon[74426]: mon.compute-0@0(leader).osd e91 e91: 3 total, 3 up, 3 in
Dec 11 09:17:48 compute-0 ceph-mon[74426]: log_channel(cluster) log [DBG] : osdmap e91: 3 total, 3 up, 3 in
Dec 11 09:17:48 compute-0 ceph-mon[74426]: pgmap v104: 353 pgs: 353 active+clean; 456 KiB data, 125 MiB used, 60 GiB / 60 GiB avail
Dec 11 09:17:48 compute-0 ceph-mon[74426]: 8.a scrub starts
Dec 11 09:17:48 compute-0 ceph-mon[74426]: 8.a scrub ok
Dec 11 09:17:48 compute-0 ceph-mon[74426]: 11.5 scrub starts
Dec 11 09:17:48 compute-0 ceph-mon[74426]: 11.5 scrub ok
Dec 11 09:17:48 compute-0 ceph-mon[74426]: 12.12 scrub starts
Dec 11 09:17:48 compute-0 ceph-mon[74426]: 12.12 scrub ok
Dec 11 09:17:48 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 91 pg[10.a( v 56'1088 (0'0,56'1088] local-lis/les=70/71 n=6 ec=61/50 lis/c=70/70 les/c/f=71/71/0 sis=91) [0]/[1] r=0 lpr=91 pi=[70,91)/1 crt=56'1088 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Dec 11 09:17:48 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 91 pg[10.1a( v 56'1088 (0'0,56'1088] local-lis/les=70/71 n=4 ec=61/50 lis/c=70/70 les/c/f=71/71/0 sis=91) [0]/[1] r=0 lpr=91 pi=[70,91)/1 crt=56'1088 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Dec 11 09:17:48 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 91 pg[10.a( v 56'1088 (0'0,56'1088] local-lis/les=70/71 n=6 ec=61/50 lis/c=70/70 les/c/f=71/71/0 sis=91) [0]/[1] r=0 lpr=91 pi=[70,91)/1 crt=56'1088 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Dec 11 09:17:48 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 91 pg[10.1a( v 56'1088 (0'0,56'1088] local-lis/les=70/71 n=4 ec=61/50 lis/c=70/70 les/c/f=71/71/0 sis=91) [0]/[1] r=0 lpr=91 pi=[70,91)/1 crt=56'1088 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Dec 11 09:17:48 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-2-0-compute-0-iryjby[94782]: 11/12/2025 09:17:48 : epoch 693a8bfe : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fef70004440 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 11 09:17:48 compute-0 ceph-osd[82859]: log_channel(cluster) log [DBG] : 12.6 scrub starts
Dec 11 09:17:48 compute-0 ceph-osd[82859]: log_channel(cluster) log [DBG] : 12.6 scrub ok
Dec 11 09:17:48 compute-0 ceph-mgr[74715]: log_channel(cluster) log [DBG] : pgmap v107: 353 pgs: 2 active+remapped, 351 active+clean; 456 KiB data, 126 MiB used, 60 GiB / 60 GiB avail; 82 B/s, 2 objects/s recovering
Dec 11 09:17:48 compute-0 ceph-mon[74426]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "12"} v 0)
Dec 11 09:17:48 compute-0 ceph-mon[74426]: log_channel(audit) log [INF] : from='mgr.14400 192.168.122.100:0/1063598679' entity='mgr.compute-0.wwpcae' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "12"}]: dispatch
Dec 11 09:17:49 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-2-0-compute-0-iryjby[94782]: 11/12/2025 09:17:49 : epoch 693a8bfe : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fef54004050 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 11 09:17:49 compute-0 ceph-mon[74426]: mon.compute-0@0(leader).osd e91 do_prune osdmap full prune enabled
Dec 11 09:17:49 compute-0 ceph-mon[74426]: log_channel(audit) log [INF] : from='mgr.14400 192.168.122.100:0/1063598679' entity='mgr.compute-0.wwpcae' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "12"}]': finished
Dec 11 09:17:49 compute-0 ceph-mon[74426]: mon.compute-0@0(leader).osd e92 e92: 3 total, 3 up, 3 in
Dec 11 09:17:49 compute-0 ceph-mon[74426]: log_channel(cluster) log [DBG] : osdmap e92: 3 total, 3 up, 3 in
Dec 11 09:17:49 compute-0 ceph-mon[74426]: 12.13 scrub starts
Dec 11 09:17:49 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 92 pg[10.1a( v 56'1088 (0'0,56'1088] local-lis/les=91/92 n=4 ec=61/50 lis/c=70/70 les/c/f=71/71/0 sis=91) [0]/[1] async=[0] r=0 lpr=91 pi=[70,91)/1 crt=56'1088 mlcod 0'0 active+remapped mbc={255={(0+1)=4}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 11 09:17:49 compute-0 ceph-mon[74426]: 12.13 scrub ok
Dec 11 09:17:49 compute-0 ceph-mon[74426]: 11.4 scrub starts
Dec 11 09:17:49 compute-0 ceph-mon[74426]: 11.4 scrub ok
Dec 11 09:17:49 compute-0 ceph-mon[74426]: osdmap e91: 3 total, 3 up, 3 in
Dec 11 09:17:49 compute-0 ceph-mon[74426]: 12.6 scrub starts
Dec 11 09:17:49 compute-0 ceph-mon[74426]: 12.6 scrub ok
Dec 11 09:17:49 compute-0 ceph-mon[74426]: pgmap v107: 353 pgs: 2 active+remapped, 351 active+clean; 456 KiB data, 126 MiB used, 60 GiB / 60 GiB avail; 82 B/s, 2 objects/s recovering
Dec 11 09:17:49 compute-0 ceph-mon[74426]: from='mgr.14400 192.168.122.100:0/1063598679' entity='mgr.compute-0.wwpcae' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "12"}]: dispatch
Dec 11 09:17:49 compute-0 ceph-mon[74426]: 9.e scrub starts
Dec 11 09:17:49 compute-0 ceph-mon[74426]: 9.e scrub ok
Dec 11 09:17:49 compute-0 ceph-mon[74426]: from='mgr.14400 192.168.122.100:0/1063598679' entity='mgr.compute-0.wwpcae' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "12"}]': finished
Dec 11 09:17:49 compute-0 ceph-mon[74426]: osdmap e92: 3 total, 3 up, 3 in
Dec 11 09:17:49 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 92 pg[10.a( v 56'1088 (0'0,56'1088] local-lis/les=91/92 n=6 ec=61/50 lis/c=70/70 les/c/f=71/71/0 sis=91) [0]/[1] async=[0] r=0 lpr=91 pi=[70,91)/1 crt=56'1088 mlcod 0'0 active+remapped mbc={255={(0+1)=9}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 11 09:17:49 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-2-0-compute-0-iryjby[94782]: 11/12/2025 09:17:49 : epoch 693a8bfe : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fef50000b60 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 11 09:17:49 compute-0 ceph-osd[82859]: log_channel(cluster) log [DBG] : 12.b deep-scrub starts
Dec 11 09:17:49 compute-0 ceph-osd[82859]: log_channel(cluster) log [DBG] : 12.b deep-scrub ok
Dec 11 09:17:50 compute-0 ceph-mon[74426]: mon.compute-0@0(leader).osd e92 do_prune osdmap full prune enabled
Dec 11 09:17:50 compute-0 ceph-mon[74426]: mon.compute-0@0(leader).osd e93 e93: 3 total, 3 up, 3 in
Dec 11 09:17:50 compute-0 ceph-mon[74426]: log_channel(cluster) log [DBG] : osdmap e93: 3 total, 3 up, 3 in
Dec 11 09:17:50 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 93 pg[10.a( v 56'1088 (0'0,56'1088] local-lis/les=91/92 n=6 ec=61/50 lis/c=91/70 les/c/f=92/71/0 sis=93 pruub=14.891564369s) [0] async=[0] r=-1 lpr=93 pi=[70,93)/1 crt=56'1088 mlcod 56'1088 active pruub 226.461456299s@ mbc={255={}}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 11 09:17:50 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 93 pg[10.a( v 56'1088 (0'0,56'1088] local-lis/les=91/92 n=6 ec=61/50 lis/c=91/70 les/c/f=92/71/0 sis=93 pruub=14.891438484s) [0] r=-1 lpr=93 pi=[70,93)/1 crt=56'1088 mlcod 0'0 unknown NOTIFY pruub 226.461456299s@ mbc={}] state<Start>: transitioning to Stray
Dec 11 09:17:50 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 93 pg[10.1a( v 56'1088 (0'0,56'1088] local-lis/les=91/92 n=4 ec=61/50 lis/c=91/70 les/c/f=92/71/0 sis=93 pruub=14.888053894s) [0] async=[0] r=-1 lpr=93 pi=[70,93)/1 crt=56'1088 mlcod 56'1088 active pruub 226.458923340s@ mbc={255={}}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 11 09:17:50 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 93 pg[10.1a( v 56'1088 (0'0,56'1088] local-lis/les=91/92 n=4 ec=61/50 lis/c=91/70 les/c/f=92/71/0 sis=93 pruub=14.888002396s) [0] r=-1 lpr=93 pi=[70,93)/1 crt=56'1088 mlcod 0'0 unknown NOTIFY pruub 226.458923340s@ mbc={}] state<Start>: transitioning to Stray
Dec 11 09:17:50 compute-0 ceph-mon[74426]: 12.9 scrub starts
Dec 11 09:17:50 compute-0 ceph-mon[74426]: 12.9 scrub ok
Dec 11 09:17:50 compute-0 ceph-mon[74426]: 12.b deep-scrub starts
Dec 11 09:17:50 compute-0 ceph-mon[74426]: 12.b deep-scrub ok
Dec 11 09:17:50 compute-0 ceph-mon[74426]: 9.a deep-scrub starts
Dec 11 09:17:50 compute-0 ceph-mon[74426]: 9.a deep-scrub ok
Dec 11 09:17:50 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-2-0-compute-0-iryjby[94782]: 11/12/2025 09:17:50 : epoch 693a8bfe : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fef60003490 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 11 09:17:50 compute-0 ceph-osd[82859]: log_channel(cluster) log [DBG] : 12.10 deep-scrub starts
Dec 11 09:17:50 compute-0 ceph-osd[82859]: log_channel(cluster) log [DBG] : 12.10 deep-scrub ok
Dec 11 09:17:50 compute-0 ceph-mgr[74715]: log_channel(cluster) log [DBG] : pgmap v110: 353 pgs: 2 active+remapped, 351 active+clean; 456 KiB data, 126 MiB used, 60 GiB / 60 GiB avail; 82 B/s, 2 objects/s recovering
Dec 11 09:17:50 compute-0 ceph-mon[74426]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "13"} v 0)
Dec 11 09:17:50 compute-0 ceph-mon[74426]: log_channel(audit) log [INF] : from='mgr.14400 192.168.122.100:0/1063598679' entity='mgr.compute-0.wwpcae' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "13"}]: dispatch
Dec 11 09:17:51 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-2-0-compute-0-iryjby[94782]: 11/12/2025 09:17:51 : epoch 693a8bfe : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fef70004440 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 11 09:17:51 compute-0 ceph-mon[74426]: mon.compute-0@0(leader).osd e93 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 11 09:17:51 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-2-0-compute-0-iryjby[94782]: 11/12/2025 09:17:51 : epoch 693a8bfe : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fef54004050 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 11 09:17:51 compute-0 ceph-mon[74426]: mon.compute-0@0(leader).osd e93 do_prune osdmap full prune enabled
Dec 11 09:17:51 compute-0 ceph-mon[74426]: log_channel(audit) log [INF] : from='mgr.14400 192.168.122.100:0/1063598679' entity='mgr.compute-0.wwpcae' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "13"}]': finished
Dec 11 09:17:51 compute-0 ceph-mon[74426]: mon.compute-0@0(leader).osd e94 e94: 3 total, 3 up, 3 in
Dec 11 09:17:51 compute-0 ceph-mon[74426]: 12.4 scrub starts
Dec 11 09:17:51 compute-0 ceph-mon[74426]: 12.4 scrub ok
Dec 11 09:17:51 compute-0 ceph-mon[74426]: osdmap e93: 3 total, 3 up, 3 in
Dec 11 09:17:51 compute-0 ceph-mon[74426]: 12.10 deep-scrub starts
Dec 11 09:17:51 compute-0 ceph-mon[74426]: 12.10 deep-scrub ok
Dec 11 09:17:51 compute-0 ceph-mon[74426]: pgmap v110: 353 pgs: 2 active+remapped, 351 active+clean; 456 KiB data, 126 MiB used, 60 GiB / 60 GiB avail; 82 B/s, 2 objects/s recovering
Dec 11 09:17:51 compute-0 ceph-mon[74426]: from='mgr.14400 192.168.122.100:0/1063598679' entity='mgr.compute-0.wwpcae' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "13"}]: dispatch
Dec 11 09:17:51 compute-0 ceph-mon[74426]: 8.f scrub starts
Dec 11 09:17:51 compute-0 ceph-mon[74426]: 8.f scrub ok
Dec 11 09:17:51 compute-0 ceph-mon[74426]: 8.8 scrub starts
Dec 11 09:17:51 compute-0 ceph-mon[74426]: 8.8 scrub ok
Dec 11 09:17:51 compute-0 ceph-mon[74426]: log_channel(cluster) log [DBG] : osdmap e94: 3 total, 3 up, 3 in
Dec 11 09:17:51 compute-0 ceph-osd[82859]: log_channel(cluster) log [DBG] : 12.c scrub starts
Dec 11 09:17:51 compute-0 ceph-osd[82859]: log_channel(cluster) log [DBG] : 12.c scrub ok
Dec 11 09:17:51 compute-0 podman[96408]: 2025-12-11 09:17:51.91050876 +0000 UTC m=+12.607442511 container create 63a3765393b5529430d74b210da1e509e38f3d2bb533da57d56d0833d9f5db8f (image=quay.io/ceph/grafana:10.4.0, name=frosty_clarke, maintainer=Grafana Labs <hello@grafana.com>)
Dec 11 09:17:51 compute-0 podman[96408]: 2025-12-11 09:17:51.88730042 +0000 UTC m=+12.584234191 image pull c8b91775d855b99270fc5d22f3c6737e8cca01ef4c25c8b0362295e0746fa39b quay.io/ceph/grafana:10.4.0
Dec 11 09:17:51 compute-0 systemd[1]: Started libpod-conmon-63a3765393b5529430d74b210da1e509e38f3d2bb533da57d56d0833d9f5db8f.scope.
Dec 11 09:17:51 compute-0 systemd[1]: Started libcrun container.
Dec 11 09:17:52 compute-0 podman[96408]: 2025-12-11 09:17:52.000684389 +0000 UTC m=+12.697618150 container init 63a3765393b5529430d74b210da1e509e38f3d2bb533da57d56d0833d9f5db8f (image=quay.io/ceph/grafana:10.4.0, name=frosty_clarke, maintainer=Grafana Labs <hello@grafana.com>)
Dec 11 09:17:52 compute-0 podman[96408]: 2025-12-11 09:17:52.010908538 +0000 UTC m=+12.707842289 container start 63a3765393b5529430d74b210da1e509e38f3d2bb533da57d56d0833d9f5db8f (image=quay.io/ceph/grafana:10.4.0, name=frosty_clarke, maintainer=Grafana Labs <hello@grafana.com>)
Dec 11 09:17:52 compute-0 podman[96408]: 2025-12-11 09:17:52.014376944 +0000 UTC m=+12.711310695 container attach 63a3765393b5529430d74b210da1e509e38f3d2bb533da57d56d0833d9f5db8f (image=quay.io/ceph/grafana:10.4.0, name=frosty_clarke, maintainer=Grafana Labs <hello@grafana.com>)
Dec 11 09:17:52 compute-0 frosty_clarke[96698]: 472 0
Dec 11 09:17:52 compute-0 systemd[1]: libpod-63a3765393b5529430d74b210da1e509e38f3d2bb533da57d56d0833d9f5db8f.scope: Deactivated successfully.
Dec 11 09:17:52 compute-0 podman[96408]: 2025-12-11 09:17:52.01552141 +0000 UTC m=+12.712455161 container died 63a3765393b5529430d74b210da1e509e38f3d2bb533da57d56d0833d9f5db8f (image=quay.io/ceph/grafana:10.4.0, name=frosty_clarke, maintainer=Grafana Labs <hello@grafana.com>)
Dec 11 09:17:52 compute-0 systemd[1]: var-lib-containers-storage-overlay-af392a212033a4a560822e047116480b6bc46da81068331d078e7a14b99efb8d-merged.mount: Deactivated successfully.
Dec 11 09:17:52 compute-0 podman[96408]: 2025-12-11 09:17:52.058731262 +0000 UTC m=+12.755665033 container remove 63a3765393b5529430d74b210da1e509e38f3d2bb533da57d56d0833d9f5db8f (image=quay.io/ceph/grafana:10.4.0, name=frosty_clarke, maintainer=Grafana Labs <hello@grafana.com>)
Dec 11 09:17:52 compute-0 systemd[1]: libpod-conmon-63a3765393b5529430d74b210da1e509e38f3d2bb533da57d56d0833d9f5db8f.scope: Deactivated successfully.
Dec 11 09:17:52 compute-0 podman[96714]: 2025-12-11 09:17:52.13762283 +0000 UTC m=+0.049383063 container create ffd5c10d29166fc823ee86f3fc7ea1e107d562e34341a668b72061d6bf7a7389 (image=quay.io/ceph/grafana:10.4.0, name=beautiful_montalcini, maintainer=Grafana Labs <hello@grafana.com>)
Dec 11 09:17:52 compute-0 systemd[1]: Started libpod-conmon-ffd5c10d29166fc823ee86f3fc7ea1e107d562e34341a668b72061d6bf7a7389.scope.
Dec 11 09:17:52 compute-0 systemd[1]: Started libcrun container.
Dec 11 09:17:52 compute-0 podman[96714]: 2025-12-11 09:17:52.117635801 +0000 UTC m=+0.029396054 image pull c8b91775d855b99270fc5d22f3c6737e8cca01ef4c25c8b0362295e0746fa39b quay.io/ceph/grafana:10.4.0
Dec 11 09:17:52 compute-0 sudo[96756]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-putvjwguatdlzfafwxxqafvjrdjbwpma ; /usr/bin/python3'
Dec 11 09:17:52 compute-0 podman[96714]: 2025-12-11 09:17:52.21811003 +0000 UTC m=+0.129870283 container init ffd5c10d29166fc823ee86f3fc7ea1e107d562e34341a668b72061d6bf7a7389 (image=quay.io/ceph/grafana:10.4.0, name=beautiful_montalcini, maintainer=Grafana Labs <hello@grafana.com>)
Dec 11 09:17:52 compute-0 sudo[96756]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 11 09:17:52 compute-0 podman[96714]: 2025-12-11 09:17:52.225464228 +0000 UTC m=+0.137224461 container start ffd5c10d29166fc823ee86f3fc7ea1e107d562e34341a668b72061d6bf7a7389 (image=quay.io/ceph/grafana:10.4.0, name=beautiful_montalcini, maintainer=Grafana Labs <hello@grafana.com>)
Dec 11 09:17:52 compute-0 beautiful_montalcini[96741]: 472 0
Dec 11 09:17:52 compute-0 systemd[1]: libpod-ffd5c10d29166fc823ee86f3fc7ea1e107d562e34341a668b72061d6bf7a7389.scope: Deactivated successfully.
Dec 11 09:17:52 compute-0 podman[96714]: 2025-12-11 09:17:52.230514844 +0000 UTC m=+0.142275077 container attach ffd5c10d29166fc823ee86f3fc7ea1e107d562e34341a668b72061d6bf7a7389 (image=quay.io/ceph/grafana:10.4.0, name=beautiful_montalcini, maintainer=Grafana Labs <hello@grafana.com>)
Dec 11 09:17:52 compute-0 podman[96714]: 2025-12-11 09:17:52.231206806 +0000 UTC m=+0.142967039 container died ffd5c10d29166fc823ee86f3fc7ea1e107d562e34341a668b72061d6bf7a7389 (image=quay.io/ceph/grafana:10.4.0, name=beautiful_montalcini, maintainer=Grafana Labs <hello@grafana.com>)
Dec 11 09:17:52 compute-0 systemd[1]: var-lib-containers-storage-overlay-82e5aa6d6f796c375b45da6725eb138ac38b70e205d68e3f1e74b5adecf7d809-merged.mount: Deactivated successfully.
Dec 11 09:17:52 compute-0 podman[96714]: 2025-12-11 09:17:52.273586652 +0000 UTC m=+0.185346885 container remove ffd5c10d29166fc823ee86f3fc7ea1e107d562e34341a668b72061d6bf7a7389 (image=quay.io/ceph/grafana:10.4.0, name=beautiful_montalcini, maintainer=Grafana Labs <hello@grafana.com>)
Dec 11 09:17:52 compute-0 systemd[1]: libpod-conmon-ffd5c10d29166fc823ee86f3fc7ea1e107d562e34341a668b72061d6bf7a7389.scope: Deactivated successfully.
Dec 11 09:17:52 compute-0 systemd[1]: Reloading.
Dec 11 09:17:52 compute-0 python3[96759]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint radosgw-admin quay.io/ceph/ceph:v19 --fsid 31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch ps --daemon-type rgw --format json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 11 09:17:52 compute-0 systemd-sysv-generator[96813]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 11 09:17:52 compute-0 systemd-rc-local-generator[96810]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 11 09:17:52 compute-0 podman[96774]: 2025-12-11 09:17:52.479818074 +0000 UTC m=+0.049174818 container create fd464258815e80b9032199b6c6524ef9c03276e8689bef2c0c9af3530c896e76 (image=quay.io/ceph/ceph:v19, name=pensive_carver, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 11 09:17:52 compute-0 podman[96774]: 2025-12-11 09:17:52.45748427 +0000 UTC m=+0.026841024 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Dec 11 09:17:52 compute-0 systemd[1]: Started libpod-conmon-fd464258815e80b9032199b6c6524ef9c03276e8689bef2c0c9af3530c896e76.scope.
Dec 11 09:17:52 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-2-0-compute-0-iryjby[94782]: 11/12/2025 09:17:52 : epoch 693a8bfe : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fef60003490 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 11 09:17:52 compute-0 ceph-mon[74426]: mon.compute-0@0(leader).osd e94 do_prune osdmap full prune enabled
Dec 11 09:17:52 compute-0 systemd[1]: Started libcrun container.
Dec 11 09:17:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0652ae7bf2c281cb7f7e07b9b04fe842aebabb75a05628341ac76e3203e5a978/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 11 09:17:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0652ae7bf2c281cb7f7e07b9b04fe842aebabb75a05628341ac76e3203e5a978/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec 11 09:17:52 compute-0 systemd[1]: Reloading.
Dec 11 09:17:52 compute-0 systemd-rc-local-generator[96857]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 11 09:17:52 compute-0 systemd-sysv-generator[96861]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 11 09:17:52 compute-0 ceph-osd[82859]: log_channel(cluster) log [DBG] : 12.a scrub starts
Dec 11 09:17:52 compute-0 ceph-osd[82859]: log_channel(cluster) log [DBG] : 12.a scrub ok
Dec 11 09:17:52 compute-0 ceph-mgr[74715]: log_channel(cluster) log [DBG] : pgmap v112: 353 pgs: 2 active+remapped, 351 active+clean; 456 KiB data, 126 MiB used, 60 GiB / 60 GiB avail
Dec 11 09:17:53 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-2-0-compute-0-iryjby[94782]: 11/12/2025 09:17:53 : epoch 693a8bfe : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fef500016a0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 11 09:17:53 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-2-0-compute-0-iryjby[94782]: 11/12/2025 09:17:53 : epoch 693a8bfe : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fef70004440 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 11 09:17:53 compute-0 ceph-osd[82859]: log_channel(cluster) log [DBG] : 12.e scrub starts
Dec 11 09:17:54 compute-0 ceph-mon[74426]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "14"} v 0)
Dec 11 09:17:54 compute-0 ceph-mon[74426]: log_channel(audit) log [INF] : from='mgr.14400 192.168.122.100:0/1063598679' entity='mgr.compute-0.wwpcae' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "14"}]: dispatch
Dec 11 09:17:54 compute-0 systemd[1]: Starting Ceph grafana.compute-0 for 31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060...
Dec 11 09:17:54 compute-0 ceph-mon[74426]: mon.compute-0@0(leader).osd e95 e95: 3 total, 3 up, 3 in
Dec 11 09:17:54 compute-0 podman[96774]: 2025-12-11 09:17:54.518296914 +0000 UTC m=+2.087653678 container init fd464258815e80b9032199b6c6524ef9c03276e8689bef2c0c9af3530c896e76 (image=quay.io/ceph/ceph:v19, name=pensive_carver, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 11 09:17:54 compute-0 ceph-osd[82859]: log_channel(cluster) log [DBG] : 12.e scrub ok
Dec 11 09:17:54 compute-0 ceph-mon[74426]: log_channel(cluster) log [DBG] : osdmap e95: 3 total, 3 up, 3 in
Dec 11 09:17:54 compute-0 podman[96774]: 2025-12-11 09:17:54.528387127 +0000 UTC m=+2.097743871 container start fd464258815e80b9032199b6c6524ef9c03276e8689bef2c0c9af3530c896e76 (image=quay.io/ceph/ceph:v19, name=pensive_carver, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.build-date=20250325, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 11 09:17:54 compute-0 ceph-mon[74426]: from='mgr.14400 192.168.122.100:0/1063598679' entity='mgr.compute-0.wwpcae' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "13"}]': finished
Dec 11 09:17:54 compute-0 ceph-mon[74426]: osdmap e94: 3 total, 3 up, 3 in
Dec 11 09:17:54 compute-0 ceph-mon[74426]: 12.c scrub starts
Dec 11 09:17:54 compute-0 ceph-mon[74426]: 12.c scrub ok
Dec 11 09:17:54 compute-0 ceph-mon[74426]: 8.3 scrub starts
Dec 11 09:17:54 compute-0 ceph-mon[74426]: 8.3 scrub ok
Dec 11 09:17:54 compute-0 ceph-mon[74426]: 11.f scrub starts
Dec 11 09:17:54 compute-0 ceph-mon[74426]: 11.f scrub ok
Dec 11 09:17:54 compute-0 podman[96774]: 2025-12-11 09:17:54.533576979 +0000 UTC m=+2.102933743 container attach fd464258815e80b9032199b6c6524ef9c03276e8689bef2c0c9af3530c896e76 (image=quay.io/ceph/ceph:v19, name=pensive_carver, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, ceph=True)
Dec 11 09:17:54 compute-0 pensive_carver[96826]: ERROR: invalid flag --daemon-type
Dec 11 09:17:54 compute-0 systemd[1]: libpod-fd464258815e80b9032199b6c6524ef9c03276e8689bef2c0c9af3530c896e76.scope: Deactivated successfully.
Dec 11 09:17:54 compute-0 podman[96774]: 2025-12-11 09:17:54.603979963 +0000 UTC m=+2.173336707 container died fd464258815e80b9032199b6c6524ef9c03276e8689bef2c0c9af3530c896e76 (image=quay.io/ceph/ceph:v19, name=pensive_carver, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0)
Dec 11 09:17:54 compute-0 systemd[1]: var-lib-containers-storage-overlay-0652ae7bf2c281cb7f7e07b9b04fe842aebabb75a05628341ac76e3203e5a978-merged.mount: Deactivated successfully.
Dec 11 09:17:54 compute-0 podman[96774]: 2025-12-11 09:17:54.649967111 +0000 UTC m=+2.219323855 container remove fd464258815e80b9032199b6c6524ef9c03276e8689bef2c0c9af3530c896e76 (image=quay.io/ceph/ceph:v19, name=pensive_carver, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 11 09:17:54 compute-0 systemd[1]: libpod-conmon-fd464258815e80b9032199b6c6524ef9c03276e8689bef2c0c9af3530c896e76.scope: Deactivated successfully.
Dec 11 09:17:54 compute-0 sudo[96756]: pam_unix(sudo:session): session closed for user root
Dec 11 09:17:54 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-2-0-compute-0-iryjby[94782]: 11/12/2025 09:17:54 : epoch 693a8bfe : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fef54004050 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 11 09:17:54 compute-0 podman[96938]: 2025-12-11 09:17:54.762785394 +0000 UTC m=+0.060054486 container create 3ec8d07fc34e7ed8ee0d4326c60dde172854d4253d34e863a630ec7b70332f71 (image=quay.io/ceph/grafana:10.4.0, name=ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0, maintainer=Grafana Labs <hello@grafana.com>)
Dec 11 09:17:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b95a6393647c0ebf89aaca9c372d34aa00c35f8f47a1e8d537be1f14ced730f1/merged/etc/grafana/certs supports timestamps until 2038 (0x7fffffff)
Dec 11 09:17:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b95a6393647c0ebf89aaca9c372d34aa00c35f8f47a1e8d537be1f14ced730f1/merged/etc/grafana/grafana.ini supports timestamps until 2038 (0x7fffffff)
Dec 11 09:17:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b95a6393647c0ebf89aaca9c372d34aa00c35f8f47a1e8d537be1f14ced730f1/merged/etc/grafana/provisioning/datasources supports timestamps until 2038 (0x7fffffff)
Dec 11 09:17:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b95a6393647c0ebf89aaca9c372d34aa00c35f8f47a1e8d537be1f14ced730f1/merged/etc/grafana/provisioning/dashboards supports timestamps until 2038 (0x7fffffff)
Dec 11 09:17:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b95a6393647c0ebf89aaca9c372d34aa00c35f8f47a1e8d537be1f14ced730f1/merged/var/lib/grafana/grafana.db supports timestamps until 2038 (0x7fffffff)
Dec 11 09:17:54 compute-0 podman[96938]: 2025-12-11 09:17:54.822509947 +0000 UTC m=+0.119779059 container init 3ec8d07fc34e7ed8ee0d4326c60dde172854d4253d34e863a630ec7b70332f71 (image=quay.io/ceph/grafana:10.4.0, name=ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0, maintainer=Grafana Labs <hello@grafana.com>)
Dec 11 09:17:54 compute-0 podman[96938]: 2025-12-11 09:17:54.738580232 +0000 UTC m=+0.035849344 image pull c8b91775d855b99270fc5d22f3c6737e8cca01ef4c25c8b0362295e0746fa39b quay.io/ceph/grafana:10.4.0
Dec 11 09:17:54 compute-0 podman[96938]: 2025-12-11 09:17:54.831890389 +0000 UTC m=+0.129159481 container start 3ec8d07fc34e7ed8ee0d4326c60dde172854d4253d34e863a630ec7b70332f71 (image=quay.io/ceph/grafana:10.4.0, name=ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0, maintainer=Grafana Labs <hello@grafana.com>)
Dec 11 09:17:54 compute-0 bash[96938]: 3ec8d07fc34e7ed8ee0d4326c60dde172854d4253d34e863a630ec7b70332f71
Dec 11 09:17:54 compute-0 systemd[1]: Started Ceph grafana.compute-0 for 31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060.
Dec 11 09:17:54 compute-0 ceph-osd[82859]: log_channel(cluster) log [DBG] : 12.8 scrub starts
Dec 11 09:17:54 compute-0 ceph-osd[82859]: log_channel(cluster) log [DBG] : 12.8 scrub ok
Dec 11 09:17:54 compute-0 sudo[96340]: pam_unix(sudo:session): session closed for user root
Dec 11 09:17:54 compute-0 ceph-mon[74426]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec 11 09:17:54 compute-0 ceph-mon[74426]: log_channel(audit) log [INF] : from='mgr.14400 192.168.122.100:0/1063598679' entity='mgr.compute-0.wwpcae' 
Dec 11 09:17:54 compute-0 ceph-mon[74426]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec 11 09:17:54 compute-0 ceph-mgr[74715]: log_channel(cluster) log [DBG] : pgmap v114: 353 pgs: 2 unknown, 2 active+remapped, 349 active+clean; 456 KiB data, 126 MiB used, 60 GiB / 60 GiB avail; 58 B/s, 1 objects/s recovering
Dec 11 09:17:54 compute-0 ceph-mon[74426]: log_channel(audit) log [INF] : from='mgr.14400 192.168.122.100:0/1063598679' entity='mgr.compute-0.wwpcae' 
Dec 11 09:17:54 compute-0 ceph-mon[74426]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.grafana}] v 0)
Dec 11 09:17:54 compute-0 ceph-mon[74426]: log_channel(audit) log [INF] : from='mgr.14400 192.168.122.100:0/1063598679' entity='mgr.compute-0.wwpcae' 
Dec 11 09:17:54 compute-0 ceph-mgr[74715]: [progress INFO root] complete: finished ev bf6fdf63-6c2c-4774-94dc-9ab99b149cb9 (Updating grafana deployment (+1 -> 1))
Dec 11 09:17:54 compute-0 ceph-mgr[74715]: [progress INFO root] Completed event bf6fdf63-6c2c-4774-94dc-9ab99b149cb9 (Updating grafana deployment (+1 -> 1)) in 16 seconds
Dec 11 09:17:54 compute-0 ceph-mon[74426]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.grafana}] v 0)
Dec 11 09:17:54 compute-0 ceph-mon[74426]: log_channel(audit) log [INF] : from='mgr.14400 192.168.122.100:0/1063598679' entity='mgr.compute-0.wwpcae' 
Dec 11 09:17:54 compute-0 ceph-mgr[74715]: [progress INFO root] update: starting ev a8591716-c9f2-4037-a32c-485a8bbc4e94 (Updating ingress.rgw.default deployment (+4 -> 4))
Dec 11 09:17:54 compute-0 ceph-mon[74426]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/ingress.rgw.default/monitor_password}] v 0)
Dec 11 09:17:54 compute-0 ceph-mon[74426]: log_channel(audit) log [INF] : from='mgr.14400 192.168.122.100:0/1063598679' entity='mgr.compute-0.wwpcae' 
Dec 11 09:17:54 compute-0 ceph-mgr[74715]: [cephadm INFO cephadm.serve] Deploying daemon haproxy.rgw.default.compute-0.paephv on compute-0
Dec 11 09:17:54 compute-0 ceph-mgr[74715]: log_channel(cephadm) log [INF] : Deploying daemon haproxy.rgw.default.compute-0.paephv on compute-0
Dec 11 09:17:55 compute-0 sudo[96973]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 11 09:17:55 compute-0 sudo[96973]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 11 09:17:55 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-2-0-compute-0-iryjby[94782]: 11/12/2025 09:17:55 : epoch 693a8bfe : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fef60003490 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 11 09:17:55 compute-0 sudo[96973]: pam_unix(sudo:session): session closed for user root
Dec 11 09:17:55 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=settings t=2025-12-11T09:17:55.071162417Z level=info msg="Starting Grafana" version=10.4.0 commit=03f502a94d17f7dc4e6c34acdf8428aedd986e4c branch=HEAD compiled=2025-12-11T09:17:55Z
Dec 11 09:17:55 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=settings t=2025-12-11T09:17:55.07162157Z level=info msg="Config loaded from" file=/usr/share/grafana/conf/defaults.ini
Dec 11 09:17:55 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=settings t=2025-12-11T09:17:55.071667532Z level=info msg="Config loaded from" file=/etc/grafana/grafana.ini
Dec 11 09:17:55 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=settings t=2025-12-11T09:17:55.071672122Z level=info msg="Config overridden from command line" arg="default.paths.data=/var/lib/grafana"
Dec 11 09:17:55 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=settings t=2025-12-11T09:17:55.071675732Z level=info msg="Config overridden from command line" arg="default.paths.logs=/var/log/grafana"
Dec 11 09:17:55 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=settings t=2025-12-11T09:17:55.071686492Z level=info msg="Config overridden from command line" arg="default.paths.plugins=/var/lib/grafana/plugins"
Dec 11 09:17:55 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=settings t=2025-12-11T09:17:55.071689933Z level=info msg="Config overridden from command line" arg="default.paths.provisioning=/etc/grafana/provisioning"
Dec 11 09:17:55 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=settings t=2025-12-11T09:17:55.071697633Z level=info msg="Config overridden from command line" arg="default.log.mode=console"
Dec 11 09:17:55 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=settings t=2025-12-11T09:17:55.071702433Z level=info msg="Config overridden from Environment variable" var="GF_PATHS_DATA=/var/lib/grafana"
Dec 11 09:17:55 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=settings t=2025-12-11T09:17:55.071706103Z level=info msg="Config overridden from Environment variable" var="GF_PATHS_LOGS=/var/log/grafana"
Dec 11 09:17:55 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=settings t=2025-12-11T09:17:55.071709383Z level=info msg="Config overridden from Environment variable" var="GF_PATHS_PLUGINS=/var/lib/grafana/plugins"
Dec 11 09:17:55 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=settings t=2025-12-11T09:17:55.071712993Z level=info msg="Config overridden from Environment variable" var="GF_PATHS_PROVISIONING=/etc/grafana/provisioning"
Dec 11 09:17:55 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=settings t=2025-12-11T09:17:55.071722004Z level=info msg=Target target=[all]
Dec 11 09:17:55 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=settings t=2025-12-11T09:17:55.071730954Z level=info msg="Path Home" path=/usr/share/grafana
Dec 11 09:17:55 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=settings t=2025-12-11T09:17:55.071734314Z level=info msg="Path Data" path=/var/lib/grafana
Dec 11 09:17:55 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=settings t=2025-12-11T09:17:55.071737584Z level=info msg="Path Logs" path=/var/log/grafana
Dec 11 09:17:55 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=settings t=2025-12-11T09:17:55.071740834Z level=info msg="Path Plugins" path=/var/lib/grafana/plugins
Dec 11 09:17:55 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=settings t=2025-12-11T09:17:55.071744144Z level=info msg="Path Provisioning" path=/etc/grafana/provisioning
Dec 11 09:17:55 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=settings t=2025-12-11T09:17:55.071747464Z level=info msg="App mode production"
Dec 11 09:17:55 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=sqlstore t=2025-12-11T09:17:55.072134627Z level=info msg="Connecting to DB" dbtype=sqlite3
Dec 11 09:17:55 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=sqlstore t=2025-12-11T09:17:55.072155877Z level=warn msg="SQLite database file has broader permissions than it should" path=/var/lib/grafana/grafana.db mode=-rw-r--r-- expected=-rw-r-----
Dec 11 09:17:55 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:55.073032645Z level=info msg="Starting DB migrations"
Dec 11 09:17:55 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:55.075690967Z level=info msg="Executing migration" id="create migration_log table"
Dec 11 09:17:55 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:55.077377169Z level=info msg="Migration successfully executed" id="create migration_log table" duration=1.685452ms
Dec 11 09:17:55 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:55.079460865Z level=info msg="Executing migration" id="create user table"
Dec 11 09:17:55 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:55.080447735Z level=info msg="Migration successfully executed" id="create user table" duration=986.16µs
Dec 11 09:17:55 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:55.082495918Z level=info msg="Executing migration" id="add unique index user.login"
Dec 11 09:17:55 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:55.083307063Z level=info msg="Migration successfully executed" id="add unique index user.login" duration=800.945µs
Dec 11 09:17:55 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:55.085375237Z level=info msg="Executing migration" id="add unique index user.email"
Dec 11 09:17:55 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:55.086299706Z level=info msg="Migration successfully executed" id="add unique index user.email" duration=927.459µs
Dec 11 09:17:55 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:55.087932448Z level=info msg="Executing migration" id="drop index UQE_user_login - v1"
Dec 11 09:17:55 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:55.088649259Z level=info msg="Migration successfully executed" id="drop index UQE_user_login - v1" duration=716.821µs
Dec 11 09:17:55 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:55.09058684Z level=info msg="Executing migration" id="drop index UQE_user_email - v1"
Dec 11 09:17:55 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:55.09124692Z level=info msg="Migration successfully executed" id="drop index UQE_user_email - v1" duration=660.22µs
Dec 11 09:17:55 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:55.093448149Z level=info msg="Executing migration" id="Rename table user to user_v1 - v1"
Dec 11 09:17:55 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:55.095891885Z level=info msg="Migration successfully executed" id="Rename table user to user_v1 - v1" duration=2.443745ms
Dec 11 09:17:55 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:55.098105203Z level=info msg="Executing migration" id="create user table v2"
Dec 11 09:17:55 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:55.098944019Z level=info msg="Migration successfully executed" id="create user table v2" duration=838.306µs
Dec 11 09:17:55 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:55.101808808Z level=info msg="Executing migration" id="create index UQE_user_login - v2"
Dec 11 09:17:55 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:55.102457658Z level=info msg="Migration successfully executed" id="create index UQE_user_login - v2" duration=660.2µs
Dec 11 09:17:55 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:55.104224343Z level=info msg="Executing migration" id="create index UQE_user_email - v2"
Dec 11 09:17:55 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:55.104832231Z level=info msg="Migration successfully executed" id="create index UQE_user_email - v2" duration=617.068µs
Dec 11 09:17:55 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:55.106632118Z level=info msg="Executing migration" id="copy data_source v1 to v2"
Dec 11 09:17:55 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:55.106983129Z level=info msg="Migration successfully executed" id="copy data_source v1 to v2" duration=351.051µs
Dec 11 09:17:55 compute-0 sudo[96998]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/haproxy:2.3 --timeout 895 _orch deploy --fsid 31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060
Dec 11 09:17:55 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:55.108763344Z level=info msg="Executing migration" id="Drop old table user_v1"
Dec 11 09:17:55 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:55.10928668Z level=info msg="Migration successfully executed" id="Drop old table user_v1" duration=522.826µs
Dec 11 09:17:55 compute-0 sudo[96998]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 11 09:17:55 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:55.111014193Z level=info msg="Executing migration" id="Add column help_flags1 to user table"
Dec 11 09:17:55 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:55.112266673Z level=info msg="Migration successfully executed" id="Add column help_flags1 to user table" duration=1.25221ms
Dec 11 09:17:55 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:55.114160501Z level=info msg="Executing migration" id="Update user table charset"
Dec 11 09:17:55 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:55.114201072Z level=info msg="Migration successfully executed" id="Update user table charset" duration=41.941µs
Dec 11 09:17:55 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:55.115993838Z level=info msg="Executing migration" id="Add last_seen_at column to user"
Dec 11 09:17:55 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:55.116972848Z level=info msg="Migration successfully executed" id="Add last_seen_at column to user" duration=979.04µs
Dec 11 09:17:55 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:55.118834786Z level=info msg="Executing migration" id="Add missing user data"
Dec 11 09:17:55 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:55.119138496Z level=info msg="Migration successfully executed" id="Add missing user data" duration=303.619µs
Dec 11 09:17:55 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:55.120935262Z level=info msg="Executing migration" id="Add is_disabled column to user"
Dec 11 09:17:55 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:55.122161579Z level=info msg="Migration successfully executed" id="Add is_disabled column to user" duration=1.226297ms
Dec 11 09:17:55 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:55.123793571Z level=info msg="Executing migration" id="Add index user.login/user.email"
Dec 11 09:17:55 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:55.124503282Z level=info msg="Migration successfully executed" id="Add index user.login/user.email" duration=710.591µs
Dec 11 09:17:55 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:55.12602592Z level=info msg="Executing migration" id="Add is_service_account column to user"
Dec 11 09:17:55 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:55.127108164Z level=info msg="Migration successfully executed" id="Add is_service_account column to user" duration=1.081744ms
Dec 11 09:17:55 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:55.12859952Z level=info msg="Executing migration" id="Update is_service_account column to nullable"
Dec 11 09:17:55 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:55.136379911Z level=info msg="Migration successfully executed" id="Update is_service_account column to nullable" duration=7.777302ms
Dec 11 09:17:55 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:55.139100645Z level=info msg="Executing migration" id="Add uid column to user"
Dec 11 09:17:55 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:55.140457458Z level=info msg="Migration successfully executed" id="Add uid column to user" duration=1.356253ms
Dec 11 09:17:55 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:55.142552603Z level=info msg="Executing migration" id="Update uid column values for users"
Dec 11 09:17:55 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:55.142726528Z level=info msg="Migration successfully executed" id="Update uid column values for users" duration=177.035µs
Dec 11 09:17:55 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:55.14441649Z level=info msg="Executing migration" id="Add unique index user_uid"
Dec 11 09:17:55 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:55.145025429Z level=info msg="Migration successfully executed" id="Add unique index user_uid" duration=609.119µs
Dec 11 09:17:55 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:55.147017371Z level=info msg="Executing migration" id="create temp user table v1-7"
Dec 11 09:17:55 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:55.147763154Z level=info msg="Migration successfully executed" id="create temp user table v1-7" duration=731.233µs
Dec 11 09:17:55 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:55.149418056Z level=info msg="Executing migration" id="create index IDX_temp_user_email - v1-7"
Dec 11 09:17:55 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:55.149996673Z level=info msg="Migration successfully executed" id="create index IDX_temp_user_email - v1-7" duration=578.407µs
Dec 11 09:17:55 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:55.151516001Z level=info msg="Executing migration" id="create index IDX_temp_user_org_id - v1-7"
Dec 11 09:17:55 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:55.152148131Z level=info msg="Migration successfully executed" id="create index IDX_temp_user_org_id - v1-7" duration=631.95µs
Dec 11 09:17:55 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:55.153515283Z level=info msg="Executing migration" id="create index IDX_temp_user_code - v1-7"
Dec 11 09:17:55 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:55.154076211Z level=info msg="Migration successfully executed" id="create index IDX_temp_user_code - v1-7" duration=560.797µs
Dec 11 09:17:55 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:55.155401001Z level=info msg="Executing migration" id="create index IDX_temp_user_status - v1-7"
Dec 11 09:17:55 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:55.156080922Z level=info msg="Migration successfully executed" id="create index IDX_temp_user_status - v1-7" duration=686.881µs
Dec 11 09:17:55 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:55.157815637Z level=info msg="Executing migration" id="Update temp_user table charset"
Dec 11 09:17:55 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:55.157875149Z level=info msg="Migration successfully executed" id="Update temp_user table charset" duration=60.702µs
Dec 11 09:17:55 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:55.159680704Z level=info msg="Executing migration" id="drop index IDX_temp_user_email - v1"
Dec 11 09:17:55 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:55.160341135Z level=info msg="Migration successfully executed" id="drop index IDX_temp_user_email - v1" duration=660.411µs
Dec 11 09:17:55 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:55.161820541Z level=info msg="Executing migration" id="drop index IDX_temp_user_org_id - v1"
Dec 11 09:17:55 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:55.162494222Z level=info msg="Migration successfully executed" id="drop index IDX_temp_user_org_id - v1" duration=683.052µs
Dec 11 09:17:55 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:55.164203805Z level=info msg="Executing migration" id="drop index IDX_temp_user_code - v1"
Dec 11 09:17:55 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:55.164927057Z level=info msg="Migration successfully executed" id="drop index IDX_temp_user_code - v1" duration=723.143µs
Dec 11 09:17:55 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:55.166581948Z level=info msg="Executing migration" id="drop index IDX_temp_user_status - v1"
Dec 11 09:17:55 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:55.167224078Z level=info msg="Migration successfully executed" id="drop index IDX_temp_user_status - v1" duration=642.07µs
Dec 11 09:17:55 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:55.169148549Z level=info msg="Executing migration" id="Rename table temp_user to temp_user_tmp_qwerty - v1"
Dec 11 09:17:55 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:55.172280695Z level=info msg="Migration successfully executed" id="Rename table temp_user to temp_user_tmp_qwerty - v1" duration=3.131336ms
Dec 11 09:17:55 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:55.174155074Z level=info msg="Executing migration" id="create temp_user v2"
Dec 11 09:17:55 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:55.174862195Z level=info msg="Migration successfully executed" id="create temp_user v2" duration=706.621µs
Dec 11 09:17:55 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:55.176943081Z level=info msg="Executing migration" id="create index IDX_temp_user_email - v2"
Dec 11 09:17:55 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:55.177802337Z level=info msg="Migration successfully executed" id="create index IDX_temp_user_email - v2" duration=861.966µs
Dec 11 09:17:55 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:55.181001556Z level=info msg="Executing migration" id="create index IDX_temp_user_org_id - v2"
Dec 11 09:17:55 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:55.181848093Z level=info msg="Migration successfully executed" id="create index IDX_temp_user_org_id - v2" duration=846.576µs
Dec 11 09:17:55 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:55.194895517Z level=info msg="Executing migration" id="create index IDX_temp_user_code - v2"
Dec 11 09:17:55 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:55.195855997Z level=info msg="Migration successfully executed" id="create index IDX_temp_user_code - v2" duration=971.01µs
Dec 11 09:17:55 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:55.198723836Z level=info msg="Executing migration" id="create index IDX_temp_user_status - v2"
Dec 11 09:17:55 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:55.199528952Z level=info msg="Migration successfully executed" id="create index IDX_temp_user_status - v2" duration=805.486µs
Dec 11 09:17:55 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:55.201521653Z level=info msg="Executing migration" id="copy temp_user v1 to v2"
Dec 11 09:17:55 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:55.202269807Z level=info msg="Migration successfully executed" id="copy temp_user v1 to v2" duration=744.584µs
Dec 11 09:17:55 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:55.203914267Z level=info msg="Executing migration" id="drop temp_user_tmp_qwerty"
Dec 11 09:17:55 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:55.204576428Z level=info msg="Migration successfully executed" id="drop temp_user_tmp_qwerty" duration=662.291µs
Dec 11 09:17:55 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:55.206045103Z level=info msg="Executing migration" id="Set created for temp users that will otherwise prematurely expire"
Dec 11 09:17:55 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:55.206508278Z level=info msg="Migration successfully executed" id="Set created for temp users that will otherwise prematurely expire" duration=462.925µs
Dec 11 09:17:55 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:55.208225912Z level=info msg="Executing migration" id="create star table"
Dec 11 09:17:55 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:55.208871061Z level=info msg="Migration successfully executed" id="create star table" duration=645.019µs
Dec 11 09:17:55 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:55.210527463Z level=info msg="Executing migration" id="add unique index star.user_id_dashboard_id"
Dec 11 09:17:55 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:55.211193683Z level=info msg="Migration successfully executed" id="add unique index star.user_id_dashboard_id" duration=665.59µs
Dec 11 09:17:55 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:55.212777763Z level=info msg="Executing migration" id="create org table v1"
Dec 11 09:17:55 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:55.213450074Z level=info msg="Migration successfully executed" id="create org table v1" duration=672.141µs
Dec 11 09:17:55 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:55.21496031Z level=info msg="Executing migration" id="create index UQE_org_name - v1"
Dec 11 09:17:55 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:55.215644862Z level=info msg="Migration successfully executed" id="create index UQE_org_name - v1" duration=683.872µs
Dec 11 09:17:55 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:55.217113917Z level=info msg="Executing migration" id="create org_user table v1"
Dec 11 09:17:55 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:55.217702995Z level=info msg="Migration successfully executed" id="create org_user table v1" duration=588.668µs
Dec 11 09:17:55 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:55.219570423Z level=info msg="Executing migration" id="create index IDX_org_user_org_id - v1"
Dec 11 09:17:55 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:55.220275075Z level=info msg="Migration successfully executed" id="create index IDX_org_user_org_id - v1" duration=704.422µs
Dec 11 09:17:55 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:55.22168914Z level=info msg="Executing migration" id="create index UQE_org_user_org_id_user_id - v1"
Dec 11 09:17:55 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:55.22238541Z level=info msg="Migration successfully executed" id="create index UQE_org_user_org_id_user_id - v1" duration=695.48µs
Dec 11 09:17:55 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:55.223748213Z level=info msg="Executing migration" id="create index IDX_org_user_user_id - v1"
Dec 11 09:17:55 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:55.224453095Z level=info msg="Migration successfully executed" id="create index IDX_org_user_user_id - v1" duration=704.292µs
Dec 11 09:17:55 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:55.225961562Z level=info msg="Executing migration" id="Update org table charset"
Dec 11 09:17:55 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:55.225988763Z level=info msg="Migration successfully executed" id="Update org table charset" duration=28.151µs
Dec 11 09:17:55 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:55.227479529Z level=info msg="Executing migration" id="Update org_user table charset"
Dec 11 09:17:55 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:55.22749984Z level=info msg="Migration successfully executed" id="Update org_user table charset" duration=28.221µs
Dec 11 09:17:55 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:55.229377818Z level=info msg="Executing migration" id="Migrate all Read Only Viewers to Viewers"
Dec 11 09:17:55 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:55.229532163Z level=info msg="Migration successfully executed" id="Migrate all Read Only Viewers to Viewers" duration=154.365µs
Dec 11 09:17:55 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:55.231287857Z level=info msg="Executing migration" id="create dashboard table"
Dec 11 09:17:55 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:55.231997939Z level=info msg="Migration successfully executed" id="create dashboard table" duration=709.842µs
Dec 11 09:17:55 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:55.233482116Z level=info msg="Executing migration" id="add index dashboard.account_id"
Dec 11 09:17:55 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:55.234207108Z level=info msg="Migration successfully executed" id="add index dashboard.account_id" duration=724.912µs
Dec 11 09:17:55 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:55.235585701Z level=info msg="Executing migration" id="add unique index dashboard_account_id_slug"
Dec 11 09:17:55 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:55.236359955Z level=info msg="Migration successfully executed" id="add unique index dashboard_account_id_slug" duration=781.084µs
Dec 11 09:17:55 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:55.237844341Z level=info msg="Executing migration" id="create dashboard_tag table"
Dec 11 09:17:55 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:55.238433969Z level=info msg="Migration successfully executed" id="create dashboard_tag table" duration=589.398µs
Dec 11 09:17:55 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:55.23976442Z level=info msg="Executing migration" id="add unique index dashboard_tag.dasboard_id_term"
Dec 11 09:17:55 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:55.2403909Z level=info msg="Migration successfully executed" id="add unique index dashboard_tag.dasboard_id_term" duration=615.029µs
Dec 11 09:17:55 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:55.241812944Z level=info msg="Executing migration" id="drop index UQE_dashboard_tag_dashboard_id_term - v1"
Dec 11 09:17:55 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:55.242439863Z level=info msg="Migration successfully executed" id="drop index UQE_dashboard_tag_dashboard_id_term - v1" duration=622.089µs
Dec 11 09:17:55 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:55.244017473Z level=info msg="Executing migration" id="Rename table dashboard to dashboard_v1 - v1"
Dec 11 09:17:55 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:55.248557823Z level=info msg="Migration successfully executed" id="Rename table dashboard to dashboard_v1 - v1" duration=4.53999ms
Dec 11 09:17:55 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:55.250084511Z level=info msg="Executing migration" id="create dashboard v2"
Dec 11 09:17:55 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:55.250746211Z level=info msg="Migration successfully executed" id="create dashboard v2" duration=665.77µs
Dec 11 09:17:55 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:55.252196736Z level=info msg="Executing migration" id="create index IDX_dashboard_org_id - v2"
Dec 11 09:17:55 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:55.252800735Z level=info msg="Migration successfully executed" id="create index IDX_dashboard_org_id - v2" duration=603.719µs
Dec 11 09:17:55 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:55.254348803Z level=info msg="Executing migration" id="create index UQE_dashboard_org_id_slug - v2"
Dec 11 09:17:55 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:55.255068826Z level=info msg="Migration successfully executed" id="create index UQE_dashboard_org_id_slug - v2" duration=752.304µs
Dec 11 09:17:55 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:55.256573892Z level=info msg="Executing migration" id="copy dashboard v1 to v2"
Dec 11 09:17:55 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:55.256911243Z level=info msg="Migration successfully executed" id="copy dashboard v1 to v2" duration=337.27µs
Dec 11 09:17:55 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:55.260135363Z level=info msg="Executing migration" id="drop table dashboard_v1"
Dec 11 09:17:55 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:55.261455064Z level=info msg="Migration successfully executed" id="drop table dashboard_v1" duration=1.319371ms
Dec 11 09:17:55 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:55.263018072Z level=info msg="Executing migration" id="alter dashboard.data to mediumtext v1"
Dec 11 09:17:55 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:55.263082544Z level=info msg="Migration successfully executed" id="alter dashboard.data to mediumtext v1" duration=65.212µs
Dec 11 09:17:55 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:55.264722045Z level=info msg="Executing migration" id="Add column updated_by in dashboard - v2"
Dec 11 09:17:55 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:55.266496611Z level=info msg="Migration successfully executed" id="Add column updated_by in dashboard - v2" duration=1.772865ms
Dec 11 09:17:55 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:55.268219574Z level=info msg="Executing migration" id="Add column created_by in dashboard - v2"
Dec 11 09:17:55 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:55.269563996Z level=info msg="Migration successfully executed" id="Add column created_by in dashboard - v2" duration=1.344212ms
Dec 11 09:17:55 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:55.271103323Z level=info msg="Executing migration" id="Add column gnetId in dashboard"
Dec 11 09:17:55 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:55.272483456Z level=info msg="Migration successfully executed" id="Add column gnetId in dashboard" duration=1.380053ms
Dec 11 09:17:55 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:55.273933491Z level=info msg="Executing migration" id="Add index for gnetId in dashboard"
Dec 11 09:17:55 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:55.274620093Z level=info msg="Migration successfully executed" id="Add index for gnetId in dashboard" duration=686.262µs
Dec 11 09:17:55 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:55.27649382Z level=info msg="Executing migration" id="Add column plugin_id in dashboard"
Dec 11 09:17:55 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:55.278396119Z level=info msg="Migration successfully executed" id="Add column plugin_id in dashboard" duration=1.900549ms
Dec 11 09:17:55 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:55.279947298Z level=info msg="Executing migration" id="Add index for plugin_id in dashboard"
Dec 11 09:17:55 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:55.280726852Z level=info msg="Migration successfully executed" id="Add index for plugin_id in dashboard" duration=778.964µs
Dec 11 09:17:55 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:55.282245609Z level=info msg="Executing migration" id="Add index for dashboard_id in dashboard_tag"
Dec 11 09:17:55 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:55.28290738Z level=info msg="Migration successfully executed" id="Add index for dashboard_id in dashboard_tag" duration=661.651µs
Dec 11 09:17:55 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:55.285017635Z level=info msg="Executing migration" id="Update dashboard table charset"
Dec 11 09:17:55 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:55.285048926Z level=info msg="Migration successfully executed" id="Update dashboard table charset" duration=32.201µs
Dec 11 09:17:55 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:55.286710718Z level=info msg="Executing migration" id="Update dashboard_tag table charset"
Dec 11 09:17:55 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:55.286731209Z level=info msg="Migration successfully executed" id="Update dashboard_tag table charset" duration=21.071µs
Dec 11 09:17:55 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:55.288434171Z level=info msg="Executing migration" id="Add column folder_id in dashboard"
Dec 11 09:17:55 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:55.290099403Z level=info msg="Migration successfully executed" id="Add column folder_id in dashboard" duration=1.664662ms
Dec 11 09:17:55 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:55.291642451Z level=info msg="Executing migration" id="Add column isFolder in dashboard"
Dec 11 09:17:55 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:55.293113546Z level=info msg="Migration successfully executed" id="Add column isFolder in dashboard" duration=1.470425ms
Dec 11 09:17:55 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:55.294642814Z level=info msg="Executing migration" id="Add column has_acl in dashboard"
Dec 11 09:17:55 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:55.296162901Z level=info msg="Migration successfully executed" id="Add column has_acl in dashboard" duration=1.519667ms
Dec 11 09:17:55 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:55.297776262Z level=info msg="Executing migration" id="Add column uid in dashboard"
Dec 11 09:17:55 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:55.299476264Z level=info msg="Migration successfully executed" id="Add column uid in dashboard" duration=1.699732ms
Dec 11 09:17:55 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:55.300870637Z level=info msg="Executing migration" id="Update uid column values in dashboard"
Dec 11 09:17:55 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:55.301067653Z level=info msg="Migration successfully executed" id="Update uid column values in dashboard" duration=196.966µs
Dec 11 09:17:55 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:55.302596401Z level=info msg="Executing migration" id="Add unique index dashboard_org_id_uid"
Dec 11 09:17:55 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:55.303217261Z level=info msg="Migration successfully executed" id="Add unique index dashboard_org_id_uid" duration=622.779µs
Dec 11 09:17:55 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:55.305198722Z level=info msg="Executing migration" id="Remove unique index org_id_slug"
Dec 11 09:17:55 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:55.305992957Z level=info msg="Migration successfully executed" id="Remove unique index org_id_slug" duration=794.195µs
Dec 11 09:17:55 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:55.30773357Z level=info msg="Executing migration" id="Update dashboard title length"
Dec 11 09:17:55 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:55.307767181Z level=info msg="Migration successfully executed" id="Update dashboard title length" duration=29.601µs
Dec 11 09:17:55 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:55.309456694Z level=info msg="Executing migration" id="Add unique index for dashboard_org_id_title_folder_id"
Dec 11 09:17:55 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:55.310120594Z level=info msg="Migration successfully executed" id="Add unique index for dashboard_org_id_title_folder_id" duration=660.881µs
Dec 11 09:17:55 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:55.312031873Z level=info msg="Executing migration" id="create dashboard_provisioning"
Dec 11 09:17:55 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:55.313013694Z level=info msg="Migration successfully executed" id="create dashboard_provisioning" duration=985.221µs
Dec 11 09:17:55 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:55.314887972Z level=info msg="Executing migration" id="Rename table dashboard_provisioning to dashboard_provisioning_tmp_qwerty - v1"
Dec 11 09:17:55 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:55.319563508Z level=info msg="Migration successfully executed" id="Rename table dashboard_provisioning to dashboard_provisioning_tmp_qwerty - v1" duration=4.671716ms
Dec 11 09:17:55 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:55.321332192Z level=info msg="Executing migration" id="create dashboard_provisioning v2"
Dec 11 09:17:55 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:55.322046345Z level=info msg="Migration successfully executed" id="create dashboard_provisioning v2" duration=730.823µs
Dec 11 09:17:55 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:55.323613873Z level=info msg="Executing migration" id="create index IDX_dashboard_provisioning_dashboard_id - v2"
Dec 11 09:17:55 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:55.324435929Z level=info msg="Migration successfully executed" id="create index IDX_dashboard_provisioning_dashboard_id - v2" duration=821.986µs
Dec 11 09:17:55 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:55.326115741Z level=info msg="Executing migration" id="create index IDX_dashboard_provisioning_dashboard_id_name - v2"
Dec 11 09:17:55 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:55.326801892Z level=info msg="Migration successfully executed" id="create index IDX_dashboard_provisioning_dashboard_id_name - v2" duration=685.991µs
Dec 11 09:17:55 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:55.328244607Z level=info msg="Executing migration" id="copy dashboard_provisioning v1 to v2"
Dec 11 09:17:55 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:55.328561397Z level=info msg="Migration successfully executed" id="copy dashboard_provisioning v1 to v2" duration=327.21µs
Dec 11 09:17:55 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:55.329972231Z level=info msg="Executing migration" id="drop dashboard_provisioning_tmp_qwerty"
Dec 11 09:17:55 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:55.330627591Z level=info msg="Migration successfully executed" id="drop dashboard_provisioning_tmp_qwerty" duration=655.43µs
Dec 11 09:17:55 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:55.332263252Z level=info msg="Executing migration" id="Add check_sum column"
Dec 11 09:17:55 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:55.334095859Z level=info msg="Migration successfully executed" id="Add check_sum column" duration=1.828287ms
Dec 11 09:17:55 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:55.335631206Z level=info msg="Executing migration" id="Add index for dashboard_title"
Dec 11 09:17:55 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:55.336406251Z level=info msg="Migration successfully executed" id="Add index for dashboard_title" duration=774.675µs
Dec 11 09:17:55 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:55.338252658Z level=info msg="Executing migration" id="delete tags for deleted dashboards"
Dec 11 09:17:55 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:55.338420043Z level=info msg="Migration successfully executed" id="delete tags for deleted dashboards" duration=167.595µs
Dec 11 09:17:55 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:55.340118275Z level=info msg="Executing migration" id="delete stars for deleted dashboards"
Dec 11 09:17:55 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:55.340394654Z level=info msg="Migration successfully executed" id="delete stars for deleted dashboards" duration=279.609µs
Dec 11 09:17:55 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:55.342148989Z level=info msg="Executing migration" id="Add index for dashboard_is_folder"
Dec 11 09:17:55 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:55.34284422Z level=info msg="Migration successfully executed" id="Add index for dashboard_is_folder" duration=695.141µs
Dec 11 09:17:55 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:55.344250254Z level=info msg="Executing migration" id="Add isPublic for dashboard"
Dec 11 09:17:55 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:55.346189874Z level=info msg="Migration successfully executed" id="Add isPublic for dashboard" duration=1.93896ms
Dec 11 09:17:55 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:55.347886917Z level=info msg="Executing migration" id="create data_source table"
Dec 11 09:17:55 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:55.348812895Z level=info msg="Migration successfully executed" id="create data_source table" duration=923.138µs
Dec 11 09:17:55 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:55.35088658Z level=info msg="Executing migration" id="add index data_source.account_id"
Dec 11 09:17:55 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:55.35155785Z level=info msg="Migration successfully executed" id="add index data_source.account_id" duration=671.13µs
Dec 11 09:17:55 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:55.353156291Z level=info msg="Executing migration" id="add unique index data_source.account_id_name"
Dec 11 09:17:55 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:55.353932935Z level=info msg="Migration successfully executed" id="add unique index data_source.account_id_name" duration=776.044µs
Dec 11 09:17:55 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:55.355815233Z level=info msg="Executing migration" id="drop index IDX_data_source_account_id - v1"
Dec 11 09:17:55 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:55.356636119Z level=info msg="Migration successfully executed" id="drop index IDX_data_source_account_id - v1" duration=820.286µs
Dec 11 09:17:55 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:55.358343342Z level=info msg="Executing migration" id="drop index UQE_data_source_account_id_name - v1"
Dec 11 09:17:55 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:55.359076815Z level=info msg="Migration successfully executed" id="drop index UQE_data_source_account_id_name - v1" duration=733.653µs
Dec 11 09:17:55 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:55.360597571Z level=info msg="Executing migration" id="Rename table data_source to data_source_v1 - v1"
Dec 11 09:17:55 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:55.364940187Z level=info msg="Migration successfully executed" id="Rename table data_source to data_source_v1 - v1" duration=4.342306ms
Dec 11 09:17:55 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:55.366500155Z level=info msg="Executing migration" id="create data_source table v2"
Dec 11 09:17:55 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:55.367252628Z level=info msg="Migration successfully executed" id="create data_source table v2" duration=752.173µs
Dec 11 09:17:55 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:55.370743366Z level=info msg="Executing migration" id="create index IDX_data_source_org_id - v2"
Dec 11 09:17:55 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:55.371602043Z level=info msg="Migration successfully executed" id="create index IDX_data_source_org_id - v2" duration=859.106µs
Dec 11 09:17:55 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:55.373536864Z level=info msg="Executing migration" id="create index UQE_data_source_org_id_name - v2"
Dec 11 09:17:55 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:55.374207714Z level=info msg="Migration successfully executed" id="create index UQE_data_source_org_id_name - v2" duration=670.55µs
Dec 11 09:17:55 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:55.376796674Z level=info msg="Executing migration" id="Drop old table data_source_v1 #2"
Dec 11 09:17:55 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-2-0-compute-0-iryjby[94782]: 11/12/2025 09:17:55 : epoch 693a8bfe : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fef500016a0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 11 09:17:55 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:55.37762308Z level=info msg="Migration successfully executed" id="Drop old table data_source_v1 #2" duration=825.946µs
Dec 11 09:17:55 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:55.379608591Z level=info msg="Executing migration" id="Add column with_credentials"
Dec 11 09:17:55 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:55.381389987Z level=info msg="Migration successfully executed" id="Add column with_credentials" duration=1.780686ms
Dec 11 09:17:55 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:55.383298246Z level=info msg="Executing migration" id="Add secure json data column"
Dec 11 09:17:55 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:55.385558996Z level=info msg="Migration successfully executed" id="Add secure json data column" duration=2.26077ms
Dec 11 09:17:55 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:55.387671671Z level=info msg="Executing migration" id="Update data_source table charset"
Dec 11 09:17:55 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:55.387691592Z level=info msg="Migration successfully executed" id="Update data_source table charset" duration=20.751µs
Dec 11 09:17:55 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:55.390039066Z level=info msg="Executing migration" id="Update initial version to 1"
Dec 11 09:17:55 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:55.390639094Z level=info msg="Migration successfully executed" id="Update initial version to 1" duration=598.998µs
Dec 11 09:17:55 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:55.393097811Z level=info msg="Executing migration" id="Add read_only data column"
Dec 11 09:17:55 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:55.397170247Z level=info msg="Migration successfully executed" id="Add read_only data column" duration=4.066915ms
Dec 11 09:17:55 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:55.399875071Z level=info msg="Executing migration" id="Migrate logging ds to loki ds"
Dec 11 09:17:55 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:55.400393597Z level=info msg="Migration successfully executed" id="Migrate logging ds to loki ds" duration=520.897µs
Dec 11 09:17:55 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:55.403242505Z level=info msg="Executing migration" id="Update json_data with nulls"
Dec 11 09:17:55 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:55.403806243Z level=info msg="Migration successfully executed" id="Update json_data with nulls" duration=565.268µs
Dec 11 09:17:55 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:55.406265159Z level=info msg="Executing migration" id="Add uid column"
Dec 11 09:17:55 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:55.409166649Z level=info msg="Migration successfully executed" id="Add uid column" duration=2.90378ms
Dec 11 09:17:55 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:55.411634996Z level=info msg="Executing migration" id="Update uid value"
Dec 11 09:17:55 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:55.412055759Z level=info msg="Migration successfully executed" id="Update uid value" duration=422.013µs
Dec 11 09:17:55 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:55.413830494Z level=info msg="Executing migration" id="Add unique index datasource_org_id_uid"
Dec 11 09:17:55 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:55.414822705Z level=info msg="Migration successfully executed" id="Add unique index datasource_org_id_uid" duration=992.161µs
Dec 11 09:17:55 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:55.416549758Z level=info msg="Executing migration" id="add unique index datasource_org_id_is_default"
Dec 11 09:17:55 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:55.417498558Z level=info msg="Migration successfully executed" id="add unique index datasource_org_id_is_default" duration=949.02µs
Dec 11 09:17:55 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:55.419086947Z level=info msg="Executing migration" id="create api_key table"
Dec 11 09:17:55 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:55.420047117Z level=info msg="Migration successfully executed" id="create api_key table" duration=959.45µs
Dec 11 09:17:55 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:55.421903874Z level=info msg="Executing migration" id="add index api_key.account_id"
Dec 11 09:17:55 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:55.422894775Z level=info msg="Migration successfully executed" id="add index api_key.account_id" duration=990.511µs
Dec 11 09:17:55 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:55.424825275Z level=info msg="Executing migration" id="add index api_key.key"
Dec 11 09:17:55 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:55.425794796Z level=info msg="Migration successfully executed" id="add index api_key.key" duration=969.08µs
Dec 11 09:17:55 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:55.428371305Z level=info msg="Executing migration" id="add index api_key.account_id_name"
Dec 11 09:17:55 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:55.430064548Z level=info msg="Migration successfully executed" id="add index api_key.account_id_name" duration=1.694262ms
Dec 11 09:17:55 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:55.433125163Z level=info msg="Executing migration" id="drop index IDX_api_key_account_id - v1"
Dec 11 09:17:55 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:55.434329981Z level=info msg="Migration successfully executed" id="drop index IDX_api_key_account_id - v1" duration=1.208557ms
Dec 11 09:17:55 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:55.436939601Z level=info msg="Executing migration" id="drop index UQE_api_key_key - v1"
Dec 11 09:17:55 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:55.438124378Z level=info msg="Migration successfully executed" id="drop index UQE_api_key_key - v1" duration=1.185297ms
Dec 11 09:17:55 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:55.440373258Z level=info msg="Executing migration" id="drop index UQE_api_key_account_id_name - v1"
Dec 11 09:17:55 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:55.441284016Z level=info msg="Migration successfully executed" id="drop index UQE_api_key_account_id_name - v1" duration=911.348µs
Dec 11 09:17:55 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:55.445646592Z level=info msg="Executing migration" id="Rename table api_key to api_key_v1 - v1"
Dec 11 09:17:55 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:55.451460702Z level=info msg="Migration successfully executed" id="Rename table api_key to api_key_v1 - v1" duration=5.81195ms
Dec 11 09:17:55 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:55.453899598Z level=info msg="Executing migration" id="create api_key table v2"
Dec 11 09:17:55 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:55.454803505Z level=info msg="Migration successfully executed" id="create api_key table v2" duration=907.537µs
Dec 11 09:17:55 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:55.456691075Z level=info msg="Executing migration" id="create index IDX_api_key_org_id - v2"
Dec 11 09:17:55 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:55.457554581Z level=info msg="Migration successfully executed" id="create index IDX_api_key_org_id - v2" duration=863.946µs
Dec 11 09:17:55 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:55.459288835Z level=info msg="Executing migration" id="create index UQE_api_key_key - v2"
Dec 11 09:17:55 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:55.460140532Z level=info msg="Migration successfully executed" id="create index UQE_api_key_key - v2" duration=851.647µs
Dec 11 09:17:55 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:55.462442473Z level=info msg="Executing migration" id="create index UQE_api_key_org_id_name - v2"
Dec 11 09:17:55 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:55.463225067Z level=info msg="Migration successfully executed" id="create index UQE_api_key_org_id_name - v2" duration=782.604µs
Dec 11 09:17:55 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:55.465157038Z level=info msg="Executing migration" id="copy api_key v1 to v2"
Dec 11 09:17:55 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:55.465734586Z level=info msg="Migration successfully executed" id="copy api_key v1 to v2" duration=577.649µs
Dec 11 09:17:55 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:55.467286373Z level=info msg="Executing migration" id="Drop old table api_key_v1"
Dec 11 09:17:55 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:55.467967305Z level=info msg="Migration successfully executed" id="Drop old table api_key_v1" duration=680.762µs
Dec 11 09:17:55 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:55.470128202Z level=info msg="Executing migration" id="Update api_key table charset"
Dec 11 09:17:55 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:55.470225765Z level=info msg="Migration successfully executed" id="Update api_key table charset" duration=99.284µs
Dec 11 09:17:55 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:55.472134924Z level=info msg="Executing migration" id="Add expires to api_key table"
Dec 11 09:17:55 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:55.474867758Z level=info msg="Migration successfully executed" id="Add expires to api_key table" duration=2.728944ms
Dec 11 09:17:55 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:55.477685606Z level=info msg="Executing migration" id="Add service account foreign key"
Dec 11 09:17:55 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:55.480518384Z level=info msg="Migration successfully executed" id="Add service account foreign key" duration=2.832108ms
Dec 11 09:17:55 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:55.482574018Z level=info msg="Executing migration" id="set service account foreign key to nil if 0"
Dec 11 09:17:55 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:55.482836946Z level=info msg="Migration successfully executed" id="set service account foreign key to nil if 0" duration=263.468µs
Dec 11 09:17:55 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:55.484755386Z level=info msg="Executing migration" id="Add last_used_at to api_key table"
Dec 11 09:17:55 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:55.486969924Z level=info msg="Migration successfully executed" id="Add last_used_at to api_key table" duration=2.214158ms
Dec 11 09:17:55 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:55.488737619Z level=info msg="Executing migration" id="Add is_revoked column to api_key table"
Dec 11 09:17:55 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:55.492796285Z level=info msg="Migration successfully executed" id="Add is_revoked column to api_key table" duration=4.052306ms
Dec 11 09:17:55 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:55.494666953Z level=info msg="Executing migration" id="create dashboard_snapshot table v4"
Dec 11 09:17:55 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:55.495357075Z level=info msg="Migration successfully executed" id="create dashboard_snapshot table v4" duration=692.362µs
Dec 11 09:17:55 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:55.497111489Z level=info msg="Executing migration" id="drop table dashboard_snapshot_v4 #1"
Dec 11 09:17:55 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:55.497682797Z level=info msg="Migration successfully executed" id="drop table dashboard_snapshot_v4 #1" duration=570.998µs
Dec 11 09:17:55 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:55.499863734Z level=info msg="Executing migration" id="create dashboard_snapshot table v5 #2"
Dec 11 09:17:55 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:55.501149825Z level=info msg="Migration successfully executed" id="create dashboard_snapshot table v5 #2" duration=1.29057ms
Dec 11 09:17:55 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:55.503223919Z level=info msg="Executing migration" id="create index UQE_dashboard_snapshot_key - v5"
Dec 11 09:17:55 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:55.50423115Z level=info msg="Migration successfully executed" id="create index UQE_dashboard_snapshot_key - v5" duration=1.007321ms
Dec 11 09:17:55 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:55.506533002Z level=info msg="Executing migration" id="create index UQE_dashboard_snapshot_delete_key - v5"
Dec 11 09:17:55 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:55.507533053Z level=info msg="Migration successfully executed" id="create index UQE_dashboard_snapshot_delete_key - v5" duration=1.000521ms
Dec 11 09:17:55 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:55.509429112Z level=info msg="Executing migration" id="create index IDX_dashboard_snapshot_user_id - v5"
Dec 11 09:17:55 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:55.510377961Z level=info msg="Migration successfully executed" id="create index IDX_dashboard_snapshot_user_id - v5" duration=949.479µs
Dec 11 09:17:55 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:55.512205498Z level=info msg="Executing migration" id="alter dashboard_snapshot to mediumtext v2"
Dec 11 09:17:55 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:55.51227128Z level=info msg="Migration successfully executed" id="alter dashboard_snapshot to mediumtext v2" duration=66.972µs
Dec 11 09:17:55 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:55.514062935Z level=info msg="Executing migration" id="Update dashboard_snapshot table charset"
Dec 11 09:17:55 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:55.514092036Z level=info msg="Migration successfully executed" id="Update dashboard_snapshot table charset" duration=30.341µs
Dec 11 09:17:55 compute-0 ceph-mon[74426]: mon.compute-0@0(leader).osd e95 do_prune osdmap full prune enabled
Dec 11 09:17:55 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:55.518021748Z level=info msg="Executing migration" id="Add column external_delete_url to dashboard_snapshots table"
Dec 11 09:17:55 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:55.520926479Z level=info msg="Migration successfully executed" id="Add column external_delete_url to dashboard_snapshots table" duration=2.90353ms
Dec 11 09:17:55 compute-0 ceph-mon[74426]: log_channel(audit) log [INF] : from='mgr.14400 192.168.122.100:0/1063598679' entity='mgr.compute-0.wwpcae' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "14"}]': finished
Dec 11 09:17:55 compute-0 ceph-mon[74426]: mon.compute-0@0(leader).osd e96 e96: 3 total, 3 up, 3 in
Dec 11 09:17:55 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:55.524398817Z level=info msg="Executing migration" id="Add encrypted dashboard json column"
Dec 11 09:17:55 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:55.527174503Z level=info msg="Migration successfully executed" id="Add encrypted dashboard json column" duration=2.776487ms
Dec 11 09:17:55 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:55.52903625Z level=info msg="Executing migration" id="Change dashboard_encrypted column to MEDIUMBLOB"
Dec 11 09:17:55 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:55.529106723Z level=info msg="Migration successfully executed" id="Change dashboard_encrypted column to MEDIUMBLOB" duration=71.912µs
Dec 11 09:17:55 compute-0 ceph-mon[74426]: log_channel(cluster) log [DBG] : osdmap e96: 3 total, 3 up, 3 in
Dec 11 09:17:55 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:55.531746664Z level=info msg="Executing migration" id="create quota table v1"
Dec 11 09:17:55 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:55.53288761Z level=info msg="Migration successfully executed" id="create quota table v1" duration=1.147716ms
Dec 11 09:17:55 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:55.537127231Z level=info msg="Executing migration" id="create index UQE_quota_org_id_user_id_target - v1"
Dec 11 09:17:55 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:55.538059491Z level=info msg="Migration successfully executed" id="create index UQE_quota_org_id_user_id_target - v1" duration=934.81µs
Dec 11 09:17:55 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:55.540377032Z level=info msg="Executing migration" id="Update quota table charset"
Dec 11 09:17:55 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:55.540434394Z level=info msg="Migration successfully executed" id="Update quota table charset" duration=55.992µs
Dec 11 09:17:55 compute-0 ceph-mon[74426]: 12.a scrub starts
Dec 11 09:17:55 compute-0 ceph-mon[74426]: 12.a scrub ok
Dec 11 09:17:55 compute-0 ceph-mon[74426]: pgmap v112: 353 pgs: 2 active+remapped, 351 active+clean; 456 KiB data, 126 MiB used, 60 GiB / 60 GiB avail
Dec 11 09:17:55 compute-0 ceph-mon[74426]: 12.7 scrub starts
Dec 11 09:17:55 compute-0 ceph-mon[74426]: 12.7 scrub ok
Dec 11 09:17:55 compute-0 ceph-mon[74426]: 11.12 deep-scrub starts
Dec 11 09:17:55 compute-0 ceph-mon[74426]: 11.12 deep-scrub ok
Dec 11 09:17:55 compute-0 ceph-mon[74426]: 12.e scrub starts
Dec 11 09:17:55 compute-0 ceph-mon[74426]: 12.17 scrub starts
Dec 11 09:17:55 compute-0 ceph-mon[74426]: 12.17 scrub ok
Dec 11 09:17:55 compute-0 ceph-mon[74426]: from='mgr.14400 192.168.122.100:0/1063598679' entity='mgr.compute-0.wwpcae' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "14"}]: dispatch
Dec 11 09:17:55 compute-0 ceph-mon[74426]: 8.17 scrub starts
Dec 11 09:17:55 compute-0 ceph-mon[74426]: 8.17 scrub ok
Dec 11 09:17:55 compute-0 ceph-mon[74426]: 12.e scrub ok
Dec 11 09:17:55 compute-0 ceph-mon[74426]: osdmap e95: 3 total, 3 up, 3 in
Dec 11 09:17:55 compute-0 ceph-mon[74426]: 12.8 scrub starts
Dec 11 09:17:55 compute-0 ceph-mon[74426]: 12.8 scrub ok
Dec 11 09:17:55 compute-0 ceph-mon[74426]: from='mgr.14400 192.168.122.100:0/1063598679' entity='mgr.compute-0.wwpcae' 
Dec 11 09:17:55 compute-0 ceph-mon[74426]: pgmap v114: 353 pgs: 2 unknown, 2 active+remapped, 349 active+clean; 456 KiB data, 126 MiB used, 60 GiB / 60 GiB avail; 58 B/s, 1 objects/s recovering
Dec 11 09:17:55 compute-0 ceph-mon[74426]: from='mgr.14400 192.168.122.100:0/1063598679' entity='mgr.compute-0.wwpcae' 
Dec 11 09:17:55 compute-0 ceph-mon[74426]: from='mgr.14400 192.168.122.100:0/1063598679' entity='mgr.compute-0.wwpcae' 
Dec 11 09:17:55 compute-0 ceph-mon[74426]: from='mgr.14400 192.168.122.100:0/1063598679' entity='mgr.compute-0.wwpcae' 
Dec 11 09:17:55 compute-0 ceph-mon[74426]: from='mgr.14400 192.168.122.100:0/1063598679' entity='mgr.compute-0.wwpcae' 
Dec 11 09:17:55 compute-0 ceph-mon[74426]: Deploying daemon haproxy.rgw.default.compute-0.paephv on compute-0
Dec 11 09:17:55 compute-0 ceph-mon[74426]: 8.11 scrub starts
Dec 11 09:17:55 compute-0 ceph-mon[74426]: 8.11 scrub ok
Dec 11 09:17:55 compute-0 ceph-mon[74426]: 11.1 deep-scrub starts
Dec 11 09:17:55 compute-0 ceph-mon[74426]: 11.1 deep-scrub ok
Dec 11 09:17:55 compute-0 ceph-mon[74426]: from='mgr.14400 192.168.122.100:0/1063598679' entity='mgr.compute-0.wwpcae' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "14"}]': finished
Dec 11 09:17:55 compute-0 ceph-mon[74426]: osdmap e96: 3 total, 3 up, 3 in
Dec 11 09:17:55 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:55.543784868Z level=info msg="Executing migration" id="create plugin_setting table"
Dec 11 09:17:55 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:55.54480885Z level=info msg="Migration successfully executed" id="create plugin_setting table" duration=1.026271ms
Dec 11 09:17:55 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:55.547154992Z level=info msg="Executing migration" id="create index UQE_plugin_setting_org_id_plugin_id - v1"
Dec 11 09:17:55 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:55.547996029Z level=info msg="Migration successfully executed" id="create index UQE_plugin_setting_org_id_plugin_id - v1" duration=840.987µs
Dec 11 09:17:55 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:55.550198607Z level=info msg="Executing migration" id="Add column plugin_version to plugin_settings"
Dec 11 09:17:55 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:55.552924381Z level=info msg="Migration successfully executed" id="Add column plugin_version to plugin_settings" duration=2.723644ms
Dec 11 09:17:55 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:55.555027117Z level=info msg="Executing migration" id="Update plugin_setting table charset"
Dec 11 09:17:55 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:55.555060058Z level=info msg="Migration successfully executed" id="Update plugin_setting table charset" duration=33.951µs
Dec 11 09:17:55 compute-0 podman[97061]: 2025-12-11 09:17:55.556189143 +0000 UTC m=+0.049819947 container create 990913b4dae593192e88d5838dbb2423e938b217ab4c2f6c29304edb626db733 (image=quay.io/ceph/haproxy:2.3, name=dreamy_heyrovsky)
Dec 11 09:17:55 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:55.557198295Z level=info msg="Executing migration" id="create session table"
Dec 11 09:17:55 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:55.557985939Z level=info msg="Migration successfully executed" id="create session table" duration=787.315µs
Dec 11 09:17:55 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:55.559895468Z level=info msg="Executing migration" id="Drop old table playlist table"
Dec 11 09:17:55 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:55.559993781Z level=info msg="Migration successfully executed" id="Drop old table playlist table" duration=98.953µs
Dec 11 09:17:55 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:55.562029524Z level=info msg="Executing migration" id="Drop old table playlist_item table"
Dec 11 09:17:55 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:55.562106517Z level=info msg="Migration successfully executed" id="Drop old table playlist_item table" duration=77.043µs
Dec 11 09:17:55 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:55.565069159Z level=info msg="Executing migration" id="create playlist table v2"
Dec 11 09:17:55 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:55.565833913Z level=info msg="Migration successfully executed" id="create playlist table v2" duration=764.394µs
Dec 11 09:17:55 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:55.568909028Z level=info msg="Executing migration" id="create playlist item table v2"
Dec 11 09:17:55 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:55.569553837Z level=info msg="Migration successfully executed" id="create playlist item table v2" duration=645.489µs
Dec 11 09:17:55 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:55.571709425Z level=info msg="Executing migration" id="Update playlist table charset"
Dec 11 09:17:55 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:55.571740626Z level=info msg="Migration successfully executed" id="Update playlist table charset" duration=30.931µs
Dec 11 09:17:55 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:55.574301956Z level=info msg="Executing migration" id="Update playlist_item table charset"
Dec 11 09:17:55 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:55.574337127Z level=info msg="Migration successfully executed" id="Update playlist_item table charset" duration=35.711µs
Dec 11 09:17:55 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:55.576646228Z level=info msg="Executing migration" id="Add playlist column created_at"
Dec 11 09:17:55 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:55.579782806Z level=info msg="Migration successfully executed" id="Add playlist column created_at" duration=3.136348ms
Dec 11 09:17:55 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:55.582334835Z level=info msg="Executing migration" id="Add playlist column updated_at"
Dec 11 09:17:55 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:55.58538528Z level=info msg="Migration successfully executed" id="Add playlist column updated_at" duration=3.046524ms
Dec 11 09:17:55 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:55.589187477Z level=info msg="Executing migration" id="drop preferences table v2"
Dec 11 09:17:55 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:55.58928389Z level=info msg="Migration successfully executed" id="drop preferences table v2" duration=96.143µs
Dec 11 09:17:55 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:55.594940186Z level=info msg="Executing migration" id="drop preferences table v3"
Dec 11 09:17:55 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:55.595020098Z level=info msg="Migration successfully executed" id="drop preferences table v3" duration=80.292µs
Dec 11 09:17:55 compute-0 systemd[1]: Started libpod-conmon-990913b4dae593192e88d5838dbb2423e938b217ab4c2f6c29304edb626db733.scope.
Dec 11 09:17:55 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:55.597604708Z level=info msg="Executing migration" id="create preferences table v3"
Dec 11 09:17:55 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:55.598473246Z level=info msg="Migration successfully executed" id="create preferences table v3" duration=865.947µs
Dec 11 09:17:55 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:55.603602295Z level=info msg="Executing migration" id="Update preferences table charset"
Dec 11 09:17:55 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:55.603666887Z level=info msg="Migration successfully executed" id="Update preferences table charset" duration=71.123µs
Dec 11 09:17:55 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:55.606879457Z level=info msg="Executing migration" id="Add column team_id in preferences"
Dec 11 09:17:55 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:55.611192041Z level=info msg="Migration successfully executed" id="Add column team_id in preferences" duration=4.307185ms
Dec 11 09:17:55 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:55.617203677Z level=info msg="Executing migration" id="Update team_id column values in preferences"
Dec 11 09:17:55 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:55.617491476Z level=info msg="Migration successfully executed" id="Update team_id column values in preferences" duration=291.789µs
Dec 11 09:17:55 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:55.619503989Z level=info msg="Executing migration" id="Add column week_start in preferences"
Dec 11 09:17:55 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:55.622098649Z level=info msg="Migration successfully executed" id="Add column week_start in preferences" duration=2.59117ms
Dec 11 09:17:55 compute-0 systemd[1]: Started libcrun container.
Dec 11 09:17:55 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:55.624331969Z level=info msg="Executing migration" id="Add column preferences.json_data"
Dec 11 09:17:55 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:55.626807165Z level=info msg="Migration successfully executed" id="Add column preferences.json_data" duration=2.492227ms
Dec 11 09:17:55 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:55.630103308Z level=info msg="Executing migration" id="alter preferences.json_data to mediumtext v1"
Dec 11 09:17:55 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:55.630218691Z level=info msg="Migration successfully executed" id="alter preferences.json_data to mediumtext v1" duration=119.633µs
Dec 11 09:17:55 compute-0 podman[97061]: 2025-12-11 09:17:55.534971405 +0000 UTC m=+0.028602199 image pull e85424b0d443f37ddd2dd8a3bb2ef6f18dd352b987723a921b64289023af2914 quay.io/ceph/haproxy:2.3
Dec 11 09:17:55 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:55.632506872Z level=info msg="Executing migration" id="Add preferences index org_id"
Dec 11 09:17:55 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:55.633646808Z level=info msg="Migration successfully executed" id="Add preferences index org_id" duration=1.140086ms
Dec 11 09:17:55 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:55.636391963Z level=info msg="Executing migration" id="Add preferences index user_id"
Dec 11 09:17:55 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:55.637391174Z level=info msg="Migration successfully executed" id="Add preferences index user_id" duration=1.004581ms
Dec 11 09:17:55 compute-0 podman[97061]: 2025-12-11 09:17:55.639147049 +0000 UTC m=+0.132777863 container init 990913b4dae593192e88d5838dbb2423e938b217ab4c2f6c29304edb626db733 (image=quay.io/ceph/haproxy:2.3, name=dreamy_heyrovsky)
Dec 11 09:17:55 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:55.641053188Z level=info msg="Executing migration" id="create alert table v1"
Dec 11 09:17:55 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:55.64209681Z level=info msg="Migration successfully executed" id="create alert table v1" duration=1.041252ms
Dec 11 09:17:55 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:55.64372002Z level=info msg="Executing migration" id="add index alert org_id & id "
Dec 11 09:17:55 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:55.644513005Z level=info msg="Migration successfully executed" id="add index alert org_id & id " duration=792.585µs
Dec 11 09:17:55 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:55.647296572Z level=info msg="Executing migration" id="add index alert state"
Dec 11 09:17:55 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:55.648018013Z level=info msg="Migration successfully executed" id="add index alert state" duration=721.231µs
Dec 11 09:17:55 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:55.649895492Z level=info msg="Executing migration" id="add index alert dashboard_id"
Dec 11 09:17:55 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:55.650663376Z level=info msg="Migration successfully executed" id="add index alert dashboard_id" duration=766.534µs
Dec 11 09:17:55 compute-0 podman[97061]: 2025-12-11 09:17:55.650979555 +0000 UTC m=+0.144610349 container start 990913b4dae593192e88d5838dbb2423e938b217ab4c2f6c29304edb626db733 (image=quay.io/ceph/haproxy:2.3, name=dreamy_heyrovsky)
Dec 11 09:17:55 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:55.653626607Z level=info msg="Executing migration" id="Create alert_rule_tag table v1"
Dec 11 09:17:55 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:55.654801044Z level=info msg="Migration successfully executed" id="Create alert_rule_tag table v1" duration=1.179287ms
Dec 11 09:17:55 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:55.656905999Z level=info msg="Executing migration" id="Add unique index alert_rule_tag.alert_id_tag_id"
Dec 11 09:17:55 compute-0 podman[97061]: 2025-12-11 09:17:55.656957441 +0000 UTC m=+0.150588265 container attach 990913b4dae593192e88d5838dbb2423e938b217ab4c2f6c29304edb626db733 (image=quay.io/ceph/haproxy:2.3, name=dreamy_heyrovsky)
Dec 11 09:17:55 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:55.657699154Z level=info msg="Migration successfully executed" id="Add unique index alert_rule_tag.alert_id_tag_id" duration=792.625µs
Dec 11 09:17:55 compute-0 dreamy_heyrovsky[97076]: 0 0
Dec 11 09:17:55 compute-0 systemd[1]: libpod-990913b4dae593192e88d5838dbb2423e938b217ab4c2f6c29304edb626db733.scope: Deactivated successfully.
Dec 11 09:17:55 compute-0 podman[97061]: 2025-12-11 09:17:55.659343995 +0000 UTC m=+0.152974789 container died 990913b4dae593192e88d5838dbb2423e938b217ab4c2f6c29304edb626db733 (image=quay.io/ceph/haproxy:2.3, name=dreamy_heyrovsky)
Dec 11 09:17:55 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:55.660287865Z level=info msg="Executing migration" id="drop index UQE_alert_rule_tag_alert_id_tag_id - v1"
Dec 11 09:17:55 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:55.661171512Z level=info msg="Migration successfully executed" id="drop index UQE_alert_rule_tag_alert_id_tag_id - v1" duration=883.387µs
Dec 11 09:17:55 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:55.664232057Z level=info msg="Executing migration" id="Rename table alert_rule_tag to alert_rule_tag_v1 - v1"
Dec 11 09:17:55 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:55.672707811Z level=info msg="Migration successfully executed" id="Rename table alert_rule_tag to alert_rule_tag_v1 - v1" duration=8.469303ms
Dec 11 09:17:55 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:55.675682623Z level=info msg="Executing migration" id="Create alert_rule_tag table v2"
Dec 11 09:17:55 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:55.676943122Z level=info msg="Migration successfully executed" id="Create alert_rule_tag table v2" duration=1.258978ms
Dec 11 09:17:55 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:55.679586064Z level=info msg="Executing migration" id="create index UQE_alert_rule_tag_alert_id_tag_id - Add unique index alert_rule_tag.alert_id_tag_id V2"
Dec 11 09:17:55 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:55.680625886Z level=info msg="Migration successfully executed" id="create index UQE_alert_rule_tag_alert_id_tag_id - Add unique index alert_rule_tag.alert_id_tag_id V2" duration=1.040643ms
Dec 11 09:17:55 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:55.682738712Z level=info msg="Executing migration" id="copy alert_rule_tag v1 to v2"
Dec 11 09:17:55 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:55.683084502Z level=info msg="Migration successfully executed" id="copy alert_rule_tag v1 to v2" duration=347.061µs
Dec 11 09:17:55 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:55.685620541Z level=info msg="Executing migration" id="drop table alert_rule_tag_v1"
Dec 11 09:17:55 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:55.686401735Z level=info msg="Migration successfully executed" id="drop table alert_rule_tag_v1" duration=781.934µs
Dec 11 09:17:55 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:55.688356206Z level=info msg="Executing migration" id="create alert_notification table v1"
Dec 11 09:17:55 compute-0 systemd[1]: var-lib-containers-storage-overlay-e82f99fbed177f59ed7804cc4ceb25f765a56f56a27d8ecbf75e3929f965a9c5-merged.mount: Deactivated successfully.
Dec 11 09:17:55 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:55.689340937Z level=info msg="Migration successfully executed" id="create alert_notification table v1" duration=978.181µs
Dec 11 09:17:55 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:55.691614537Z level=info msg="Executing migration" id="Add column is_default"
Dec 11 09:17:55 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:55.694517467Z level=info msg="Migration successfully executed" id="Add column is_default" duration=2.90242ms
Dec 11 09:17:55 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:55.696519449Z level=info msg="Executing migration" id="Add column frequency"
Dec 11 09:17:55 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:55.6997473Z level=info msg="Migration successfully executed" id="Add column frequency" duration=3.22563ms
Dec 11 09:17:55 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:55.702789484Z level=info msg="Executing migration" id="Add column send_reminder"
Dec 11 09:17:55 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:55.706425607Z level=info msg="Migration successfully executed" id="Add column send_reminder" duration=3.633243ms
Dec 11 09:17:55 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:55.710564065Z level=info msg="Executing migration" id="Add column disable_resolve_message"
Dec 11 09:17:55 compute-0 podman[97061]: 2025-12-11 09:17:55.711220546 +0000 UTC m=+0.204851340 container remove 990913b4dae593192e88d5838dbb2423e938b217ab4c2f6c29304edb626db733 (image=quay.io/ceph/haproxy:2.3, name=dreamy_heyrovsky)
Dec 11 09:17:55 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:55.713812336Z level=info msg="Migration successfully executed" id="Add column disable_resolve_message" duration=3.247441ms
Dec 11 09:17:55 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:55.715578051Z level=info msg="Executing migration" id="add index alert_notification org_id & name"
Dec 11 09:17:55 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:55.716292303Z level=info msg="Migration successfully executed" id="add index alert_notification org_id & name" duration=714.692µs
Dec 11 09:17:55 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:55.718204462Z level=info msg="Executing migration" id="Update alert table charset"
Dec 11 09:17:55 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:55.718235763Z level=info msg="Migration successfully executed" id="Update alert table charset" duration=33.511µs
Dec 11 09:17:55 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:55.721781133Z level=info msg="Executing migration" id="Update alert_notification table charset"
Dec 11 09:17:55 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:55.721822765Z level=info msg="Migration successfully executed" id="Update alert_notification table charset" duration=44.822µs
Dec 11 09:17:55 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:55.723475147Z level=info msg="Executing migration" id="create notification_journal table v1"
Dec 11 09:17:55 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:55.724366184Z level=info msg="Migration successfully executed" id="create notification_journal table v1" duration=890.227µs
Dec 11 09:17:55 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:55.727262224Z level=info msg="Executing migration" id="add index notification_journal org_id & alert_id & notifier_id"
Dec 11 09:17:55 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:55.72841514Z level=info msg="Migration successfully executed" id="add index notification_journal org_id & alert_id & notifier_id" duration=1.152737ms
Dec 11 09:17:55 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:55.73068759Z level=info msg="Executing migration" id="drop alert_notification_journal"
Dec 11 09:17:55 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:55.731885208Z level=info msg="Migration successfully executed" id="drop alert_notification_journal" duration=1.198228ms
Dec 11 09:17:55 compute-0 systemd[1]: libpod-conmon-990913b4dae593192e88d5838dbb2423e938b217ab4c2f6c29304edb626db733.scope: Deactivated successfully.
Dec 11 09:17:55 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:55.734664454Z level=info msg="Executing migration" id="create alert_notification_state table v1"
Dec 11 09:17:55 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:55.735569971Z level=info msg="Migration successfully executed" id="create alert_notification_state table v1" duration=905.417µs
Dec 11 09:17:55 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:55.737060668Z level=info msg="Executing migration" id="add index alert_notification_state org_id & alert_id & notifier_id"
Dec 11 09:17:55 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:55.737838222Z level=info msg="Migration successfully executed" id="add index alert_notification_state org_id & alert_id & notifier_id" duration=777.264µs
Dec 11 09:17:55 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:55.739590177Z level=info msg="Executing migration" id="Add for to alert table"
Dec 11 09:17:55 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:55.742404904Z level=info msg="Migration successfully executed" id="Add for to alert table" duration=2.817398ms
Dec 11 09:17:55 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:55.744744066Z level=info msg="Executing migration" id="Add column uid in alert_notification"
Dec 11 09:17:55 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:55.747630356Z level=info msg="Migration successfully executed" id="Add column uid in alert_notification" duration=2.88083ms
Dec 11 09:17:55 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:55.749281157Z level=info msg="Executing migration" id="Update uid column values in alert_notification"
Dec 11 09:17:55 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:55.749450272Z level=info msg="Migration successfully executed" id="Update uid column values in alert_notification" duration=169.525µs
Dec 11 09:17:55 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:55.751816996Z level=info msg="Executing migration" id="Add unique index alert_notification_org_id_uid"
Dec 11 09:17:55 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:55.752505617Z level=info msg="Migration successfully executed" id="Add unique index alert_notification_org_id_uid" duration=688.331µs
Dec 11 09:17:55 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:55.754401116Z level=info msg="Executing migration" id="Remove unique index org_id_name"
Dec 11 09:17:55 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:55.755138869Z level=info msg="Migration successfully executed" id="Remove unique index org_id_name" duration=738.193µs
Dec 11 09:17:55 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:55.757082449Z level=info msg="Executing migration" id="Add column secure_settings in alert_notification"
Dec 11 09:17:55 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:55.759901836Z level=info msg="Migration successfully executed" id="Add column secure_settings in alert_notification" duration=2.819366ms
Dec 11 09:17:55 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:55.761399403Z level=info msg="Executing migration" id="alter alert.settings to mediumtext"
Dec 11 09:17:55 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:55.761456805Z level=info msg="Migration successfully executed" id="alter alert.settings to mediumtext" duration=59.442µs
Dec 11 09:17:55 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:55.763449947Z level=info msg="Executing migration" id="Add non-unique index alert_notification_state_alert_id"
Dec 11 09:17:55 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:55.764205301Z level=info msg="Migration successfully executed" id="Add non-unique index alert_notification_state_alert_id" duration=755.564µs
Dec 11 09:17:55 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:55.765852811Z level=info msg="Executing migration" id="Add non-unique index alert_rule_tag_alert_id"
Dec 11 09:17:55 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:55.766647486Z level=info msg="Migration successfully executed" id="Add non-unique index alert_rule_tag_alert_id" duration=797.515µs
Dec 11 09:17:55 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:55.768281697Z level=info msg="Executing migration" id="Drop old annotation table v4"
Dec 11 09:17:55 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:55.76838246Z level=info msg="Migration successfully executed" id="Drop old annotation table v4" duration=101.002µs
Dec 11 09:17:55 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:55.833330003Z level=info msg="Executing migration" id="create annotation table v5"
Dec 11 09:17:55 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:55.834517511Z level=info msg="Migration successfully executed" id="create annotation table v5" duration=1.207328ms
Dec 11 09:17:55 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:55.836257624Z level=info msg="Executing migration" id="add index annotation 0 v3"
Dec 11 09:17:55 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:55.837111771Z level=info msg="Migration successfully executed" id="add index annotation 0 v3" duration=853.677µs
Dec 11 09:17:55 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:55.838787913Z level=info msg="Executing migration" id="add index annotation 1 v3"
Dec 11 09:17:55 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:55.839549937Z level=info msg="Migration successfully executed" id="add index annotation 1 v3" duration=762.914µs
Dec 11 09:17:55 compute-0 systemd[1]: Reloading.
Dec 11 09:17:55 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:55.841242969Z level=info msg="Executing migration" id="add index annotation 2 v3"
Dec 11 09:17:55 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:55.841919821Z level=info msg="Migration successfully executed" id="add index annotation 2 v3" duration=676.912µs
Dec 11 09:17:55 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:55.843568752Z level=info msg="Executing migration" id="add index annotation 3 v3"
Dec 11 09:17:55 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:55.844335226Z level=info msg="Migration successfully executed" id="add index annotation 3 v3" duration=765.924µs
Dec 11 09:17:55 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:55.845830043Z level=info msg="Executing migration" id="add index annotation 4 v3"
Dec 11 09:17:55 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:55.846686209Z level=info msg="Migration successfully executed" id="add index annotation 4 v3" duration=855.466µs
Dec 11 09:17:55 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:55.848287599Z level=info msg="Executing migration" id="Update annotation table charset"
Dec 11 09:17:55 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:55.84830892Z level=info msg="Migration successfully executed" id="Update annotation table charset" duration=22.081µs
Dec 11 09:17:55 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:55.849775825Z level=info msg="Executing migration" id="Add column region_id to annotation table"
Dec 11 09:17:55 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:55.852897392Z level=info msg="Migration successfully executed" id="Add column region_id to annotation table" duration=3.124117ms
Dec 11 09:17:55 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:55.854588695Z level=info msg="Executing migration" id="Drop category_id index"
Dec 11 09:17:55 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:55.855502253Z level=info msg="Migration successfully executed" id="Drop category_id index" duration=912.548µs
Dec 11 09:17:55 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:55.857440154Z level=info msg="Executing migration" id="Add column tags to annotation table"
Dec 11 09:17:55 compute-0 ceph-osd[82859]: log_channel(cluster) log [DBG] : 12.1c scrub starts
Dec 11 09:17:55 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:55.861359595Z level=info msg="Migration successfully executed" id="Add column tags to annotation table" duration=3.911711ms
Dec 11 09:17:55 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:55.863847493Z level=info msg="Executing migration" id="Create annotation_tag table v2"
Dec 11 09:17:55 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:55.864566385Z level=info msg="Migration successfully executed" id="Create annotation_tag table v2" duration=719.532µs
Dec 11 09:17:55 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:55.866426123Z level=info msg="Executing migration" id="Add unique index annotation_tag.annotation_id_tag_id"
Dec 11 09:17:55 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:55.867177146Z level=info msg="Migration successfully executed" id="Add unique index annotation_tag.annotation_id_tag_id" duration=750.763µs
Dec 11 09:17:55 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:55.868963763Z level=info msg="Executing migration" id="drop index UQE_annotation_tag_annotation_id_tag_id - v2"
Dec 11 09:17:55 compute-0 ceph-osd[82859]: log_channel(cluster) log [DBG] : 12.1c scrub ok
Dec 11 09:17:55 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:55.869781697Z level=info msg="Migration successfully executed" id="drop index UQE_annotation_tag_annotation_id_tag_id - v2" duration=817.494µs
Dec 11 09:17:55 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:55.871436239Z level=info msg="Executing migration" id="Rename table annotation_tag to annotation_tag_v2 - v2"
Dec 11 09:17:55 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:55.880254063Z level=info msg="Migration successfully executed" id="Rename table annotation_tag to annotation_tag_v2 - v2" duration=8.810164ms
Dec 11 09:17:55 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:55.88302858Z level=info msg="Executing migration" id="Create annotation_tag table v3"
Dec 11 09:17:55 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:55.884138884Z level=info msg="Migration successfully executed" id="Create annotation_tag table v3" duration=1.103374ms
Dec 11 09:17:55 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:55.886346672Z level=info msg="Executing migration" id="create index UQE_annotation_tag_annotation_id_tag_id - Add unique index annotation_tag.annotation_id_tag_id V3"
Dec 11 09:17:55 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:55.887152177Z level=info msg="Migration successfully executed" id="create index UQE_annotation_tag_annotation_id_tag_id - Add unique index annotation_tag.annotation_id_tag_id V3" duration=805.625µs
Dec 11 09:17:55 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:55.889285933Z level=info msg="Executing migration" id="copy annotation_tag v2 to v3"
Dec 11 09:17:55 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:55.889614173Z level=info msg="Migration successfully executed" id="copy annotation_tag v2 to v3" duration=328.69µs
Dec 11 09:17:55 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:55.891653786Z level=info msg="Executing migration" id="drop table annotation_tag_v2"
Dec 11 09:17:55 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:55.892228905Z level=info msg="Migration successfully executed" id="drop table annotation_tag_v2" duration=575.628µs
Dec 11 09:17:55 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:55.893745042Z level=info msg="Executing migration" id="Update alert annotations and set TEXT to empty"
Dec 11 09:17:55 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:55.893905717Z level=info msg="Migration successfully executed" id="Update alert annotations and set TEXT to empty" duration=161.395µs
Dec 11 09:17:55 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:55.895643581Z level=info msg="Executing migration" id="Add created time to annotation table"
Dec 11 09:17:55 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:55.89883478Z level=info msg="Migration successfully executed" id="Add created time to annotation table" duration=3.188199ms
Dec 11 09:17:55 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:55.902531655Z level=info msg="Executing migration" id="Add updated time to annotation table"
Dec 11 09:17:55 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:55.937435068Z level=info msg="Migration successfully executed" id="Add updated time to annotation table" duration=34.899603ms
Dec 11 09:17:55 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:55.940985178Z level=info msg="Executing migration" id="Add index for created in annotation table"
Dec 11 09:17:55 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:55.942451394Z level=info msg="Migration successfully executed" id="Add index for created in annotation table" duration=1.467005ms
Dec 11 09:17:55 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:55.946279963Z level=info msg="Executing migration" id="Add index for updated in annotation table"
Dec 11 09:17:55 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:55.947738558Z level=info msg="Migration successfully executed" id="Add index for updated in annotation table" duration=1.456094ms
Dec 11 09:17:55 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:55.950297657Z level=info msg="Executing migration" id="Convert existing annotations from seconds to milliseconds"
Dec 11 09:17:55 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:55.950891955Z level=info msg="Migration successfully executed" id="Convert existing annotations from seconds to milliseconds" duration=595.488µs
Dec 11 09:17:55 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:55.953742364Z level=info msg="Executing migration" id="Add epoch_end column"
Dec 11 09:17:55 compute-0 systemd-rc-local-generator[97128]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 11 09:17:55 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:55.958352168Z level=info msg="Migration successfully executed" id="Add epoch_end column" duration=4.603713ms
Dec 11 09:17:55 compute-0 systemd-sysv-generator[97131]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 11 09:17:55 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:55.96003818Z level=info msg="Executing migration" id="Add index for epoch_end"
Dec 11 09:17:55 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:55.960817654Z level=info msg="Migration successfully executed" id="Add index for epoch_end" duration=779.254µs
Dec 11 09:17:55 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:55.962563908Z level=info msg="Executing migration" id="Make epoch_end the same as epoch"
Dec 11 09:17:55 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:55.962761634Z level=info msg="Migration successfully executed" id="Make epoch_end the same as epoch" duration=197.946µs
Dec 11 09:17:55 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:55.965715496Z level=info msg="Executing migration" id="Move region to single row"
Dec 11 09:17:55 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:55.966232272Z level=info msg="Migration successfully executed" id="Move region to single row" duration=521.726µs
Dec 11 09:17:55 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:55.968296466Z level=info msg="Executing migration" id="Remove index org_id_epoch from annotation table"
Dec 11 09:17:55 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:55.969223935Z level=info msg="Migration successfully executed" id="Remove index org_id_epoch from annotation table" duration=928.309µs
Dec 11 09:17:55 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:55.970925448Z level=info msg="Executing migration" id="Remove index org_id_dashboard_id_panel_id_epoch from annotation table"
Dec 11 09:17:55 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:55.972006332Z level=info msg="Migration successfully executed" id="Remove index org_id_dashboard_id_panel_id_epoch from annotation table" duration=1.079994ms
Dec 11 09:17:55 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:55.973600931Z level=info msg="Executing migration" id="Add index for org_id_dashboard_id_epoch_end_epoch on annotation table"
Dec 11 09:17:55 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:55.974488258Z level=info msg="Migration successfully executed" id="Add index for org_id_dashboard_id_epoch_end_epoch on annotation table" duration=1.018372ms
Dec 11 09:17:55 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:55.97584677Z level=info msg="Executing migration" id="Add index for org_id_epoch_end_epoch on annotation table"
Dec 11 09:17:55 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:55.976608304Z level=info msg="Migration successfully executed" id="Add index for org_id_epoch_end_epoch on annotation table" duration=761.374µs
Dec 11 09:17:55 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:55.979304568Z level=info msg="Executing migration" id="Remove index org_id_epoch_epoch_end from annotation table"
Dec 11 09:17:55 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:55.981896408Z level=info msg="Migration successfully executed" id="Remove index org_id_epoch_epoch_end from annotation table" duration=2.594691ms
Dec 11 09:17:55 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:55.983979852Z level=info msg="Executing migration" id="Add index for alert_id on annotation table"
Dec 11 09:17:55 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:55.985075987Z level=info msg="Migration successfully executed" id="Add index for alert_id on annotation table" duration=1.096785ms
Dec 11 09:17:55 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:55.987274205Z level=info msg="Executing migration" id="Increase tags column to length 4096"
Dec 11 09:17:55 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:55.987371898Z level=info msg="Migration successfully executed" id="Increase tags column to length 4096" duration=99.343µs
Dec 11 09:17:55 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:55.989267468Z level=info msg="Executing migration" id="create test_data table"
Dec 11 09:17:55 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:55.990272178Z level=info msg="Migration successfully executed" id="create test_data table" duration=1.003951ms
Dec 11 09:17:55 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:55.991963211Z level=info msg="Executing migration" id="create dashboard_version table v1"
Dec 11 09:17:55 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:55.992989922Z level=info msg="Migration successfully executed" id="create dashboard_version table v1" duration=1.023441ms
Dec 11 09:17:55 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:55.994966644Z level=info msg="Executing migration" id="add index dashboard_version.dashboard_id"
Dec 11 09:17:55 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:55.995851621Z level=info msg="Migration successfully executed" id="add index dashboard_version.dashboard_id" duration=887.187µs
Dec 11 09:17:55 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:55.997678059Z level=info msg="Executing migration" id="add unique index dashboard_version.dashboard_id and dashboard_version.version"
Dec 11 09:17:55 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:55.998502003Z level=info msg="Migration successfully executed" id="add unique index dashboard_version.dashboard_id and dashboard_version.version" duration=823.904µs
Dec 11 09:17:56 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:56.000200167Z level=info msg="Executing migration" id="Set dashboard version to 1 where 0"
Dec 11 09:17:56 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:56.000426484Z level=info msg="Migration successfully executed" id="Set dashboard version to 1 where 0" duration=227.677µs
Dec 11 09:17:56 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:56.002283161Z level=info msg="Executing migration" id="save existing dashboard data in dashboard_version table v1"
Dec 11 09:17:56 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:56.002753006Z level=info msg="Migration successfully executed" id="save existing dashboard data in dashboard_version table v1" duration=470.075µs
Dec 11 09:17:56 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:56.004094107Z level=info msg="Executing migration" id="alter dashboard_version.data to mediumtext v1"
Dec 11 09:17:56 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:56.004151319Z level=info msg="Migration successfully executed" id="alter dashboard_version.data to mediumtext v1" duration=58.072µs
Dec 11 09:17:56 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:56.006011387Z level=info msg="Executing migration" id="create team table"
Dec 11 09:17:56 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:56.00676689Z level=info msg="Migration successfully executed" id="create team table" duration=755.173µs
Dec 11 09:17:56 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:56.008542356Z level=info msg="Executing migration" id="add index team.org_id"
Dec 11 09:17:56 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:56.009497915Z level=info msg="Migration successfully executed" id="add index team.org_id" duration=955.119µs
Dec 11 09:17:56 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:56.01159244Z level=info msg="Executing migration" id="add unique index team_org_id_name"
Dec 11 09:17:56 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:56.012599981Z level=info msg="Migration successfully executed" id="add unique index team_org_id_name" duration=1.089833ms
Dec 11 09:17:56 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:56.014615294Z level=info msg="Executing migration" id="Add column uid in team"
Dec 11 09:17:56 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:56.018738262Z level=info msg="Migration successfully executed" id="Add column uid in team" duration=4.118918ms
Dec 11 09:17:56 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:56.021222569Z level=info msg="Executing migration" id="Update uid column values in team"
Dec 11 09:17:56 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:56.021522158Z level=info msg="Migration successfully executed" id="Update uid column values in team" duration=298.729µs
Dec 11 09:17:56 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:56.024104298Z level=info msg="Executing migration" id="Add unique index team_org_id_uid"
Dec 11 09:17:56 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:56.025210093Z level=info msg="Migration successfully executed" id="Add unique index team_org_id_uid" duration=1.107405ms
Dec 11 09:17:56 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:56.027305808Z level=info msg="Executing migration" id="create team member table"
Dec 11 09:17:56 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:56.028040881Z level=info msg="Migration successfully executed" id="create team member table" duration=734.923µs
Dec 11 09:17:56 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:56.030127496Z level=info msg="Executing migration" id="add index team_member.org_id"
Dec 11 09:17:56 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:56.031095096Z level=info msg="Migration successfully executed" id="add index team_member.org_id" duration=968.201µs
Dec 11 09:17:56 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:56.033459559Z level=info msg="Executing migration" id="add unique index team_member_org_id_team_id_user_id"
Dec 11 09:17:56 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:56.034584363Z level=info msg="Migration successfully executed" id="add unique index team_member_org_id_team_id_user_id" duration=1.123974ms
Dec 11 09:17:56 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:56.036480463Z level=info msg="Executing migration" id="add index team_member.team_id"
Dec 11 09:17:56 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:56.037433572Z level=info msg="Migration successfully executed" id="add index team_member.team_id" duration=952.859µs
Dec 11 09:17:56 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:56.040655872Z level=info msg="Executing migration" id="Add column email to team table"
Dec 11 09:17:56 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:56.045775971Z level=info msg="Migration successfully executed" id="Add column email to team table" duration=5.118479ms
Dec 11 09:17:56 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:56.048739003Z level=info msg="Executing migration" id="Add column external to team_member table"
Dec 11 09:17:56 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:56.053866712Z level=info msg="Migration successfully executed" id="Add column external to team_member table" duration=5.120789ms
Dec 11 09:17:56 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:56.056641638Z level=info msg="Executing migration" id="Add column permission to team_member table"
Dec 11 09:17:56 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:56.060335533Z level=info msg="Migration successfully executed" id="Add column permission to team_member table" duration=3.692105ms
Dec 11 09:17:56 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:56.063668737Z level=info msg="Executing migration" id="create dashboard acl table"
Dec 11 09:17:56 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:56.065134002Z level=info msg="Migration successfully executed" id="create dashboard acl table" duration=1.468615ms
Dec 11 09:17:56 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:56.067655121Z level=info msg="Executing migration" id="add index dashboard_acl_dashboard_id"
Dec 11 09:17:56 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:56.068584799Z level=info msg="Migration successfully executed" id="add index dashboard_acl_dashboard_id" duration=933.018µs
Dec 11 09:17:56 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:56.070838319Z level=info msg="Executing migration" id="add unique index dashboard_acl_dashboard_id_user_id"
Dec 11 09:17:56 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:56.071931953Z level=info msg="Migration successfully executed" id="add unique index dashboard_acl_dashboard_id_user_id" duration=1.093404ms
Dec 11 09:17:56 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:56.074198773Z level=info msg="Executing migration" id="add unique index dashboard_acl_dashboard_id_team_id"
Dec 11 09:17:56 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:56.07504401Z level=info msg="Migration successfully executed" id="add unique index dashboard_acl_dashboard_id_team_id" duration=844.657µs
Dec 11 09:17:56 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:56.077022361Z level=info msg="Executing migration" id="add index dashboard_acl_user_id"
Dec 11 09:17:56 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:56.078080944Z level=info msg="Migration successfully executed" id="add index dashboard_acl_user_id" duration=1.059313ms
Dec 11 09:17:56 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:56.080393546Z level=info msg="Executing migration" id="add index dashboard_acl_team_id"
Dec 11 09:17:56 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:56.081396057Z level=info msg="Migration successfully executed" id="add index dashboard_acl_team_id" duration=1.002791ms
Dec 11 09:17:56 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:56.084951767Z level=info msg="Executing migration" id="add index dashboard_acl_org_id_role"
Dec 11 09:17:56 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:56.086118124Z level=info msg="Migration successfully executed" id="add index dashboard_acl_org_id_role" duration=1.168097ms
Dec 11 09:17:56 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:56.089188809Z level=info msg="Executing migration" id="add index dashboard_permission"
Dec 11 09:17:56 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:56.090582332Z level=info msg="Migration successfully executed" id="add index dashboard_permission" duration=1.396723ms
Dec 11 09:17:56 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:56.092875854Z level=info msg="Executing migration" id="save default acl rules in dashboard_acl table"
Dec 11 09:17:56 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:56.093632887Z level=info msg="Migration successfully executed" id="save default acl rules in dashboard_acl table" duration=705.911µs
Dec 11 09:17:56 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:56.096573338Z level=info msg="Executing migration" id="delete acl rules for deleted dashboards and folders"
Dec 11 09:17:56 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:56.096857917Z level=info msg="Migration successfully executed" id="delete acl rules for deleted dashboards and folders" duration=281.959µs
Dec 11 09:17:56 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:56.100341785Z level=info msg="Executing migration" id="create tag table"
Dec 11 09:17:56 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:56.101299135Z level=info msg="Migration successfully executed" id="create tag table" duration=957.51µs
Dec 11 09:17:56 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:56.103293437Z level=info msg="Executing migration" id="add index tag.key_value"
Dec 11 09:17:56 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:56.104127943Z level=info msg="Migration successfully executed" id="add index tag.key_value" duration=831.936µs
Dec 11 09:17:56 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:56.106628061Z level=info msg="Executing migration" id="create login attempt table"
Dec 11 09:17:56 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:56.107369003Z level=info msg="Migration successfully executed" id="create login attempt table" duration=740.202µs
Dec 11 09:17:56 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:56.110056356Z level=info msg="Executing migration" id="add index login_attempt.username"
Dec 11 09:17:56 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:56.11080852Z level=info msg="Migration successfully executed" id="add index login_attempt.username" duration=749.204µs
Dec 11 09:17:56 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:56.113907746Z level=info msg="Executing migration" id="drop index IDX_login_attempt_username - v1"
Dec 11 09:17:56 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:56.114891767Z level=info msg="Migration successfully executed" id="drop index IDX_login_attempt_username - v1" duration=983.631µs
Dec 11 09:17:56 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:56.117069275Z level=info msg="Executing migration" id="Rename table login_attempt to login_attempt_tmp_qwerty - v1"
Dec 11 09:17:56 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:56.129122949Z level=info msg="Migration successfully executed" id="Rename table login_attempt to login_attempt_tmp_qwerty - v1" duration=12.045425ms
Dec 11 09:17:56 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:56.133400451Z level=info msg="Executing migration" id="create login_attempt v2"
Dec 11 09:17:56 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:56.134385002Z level=info msg="Migration successfully executed" id="create login_attempt v2" duration=986.131µs
Dec 11 09:17:56 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:56.14042479Z level=info msg="Executing migration" id="create index IDX_login_attempt_username - v2"
Dec 11 09:17:56 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:56.145307711Z level=info msg="Migration successfully executed" id="create index IDX_login_attempt_username - v2" duration=4.882651ms
Dec 11 09:17:56 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:56.148773919Z level=info msg="Executing migration" id="copy login_attempt v1 to v2"
Dec 11 09:17:56 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:56.149270814Z level=info msg="Migration successfully executed" id="copy login_attempt v1 to v2" duration=496.895µs
Dec 11 09:17:56 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:56.152392171Z level=info msg="Executing migration" id="drop login_attempt_tmp_qwerty"
Dec 11 09:17:56 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:56.153306329Z level=info msg="Migration successfully executed" id="drop login_attempt_tmp_qwerty" duration=916.899µs
Dec 11 09:17:56 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:56.156284092Z level=info msg="Executing migration" id="create user auth table"
Dec 11 09:17:56 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:56.157107028Z level=info msg="Migration successfully executed" id="create user auth table" duration=820.876µs
Dec 11 09:17:56 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:56.159292085Z level=info msg="Executing migration" id="create index IDX_user_auth_auth_module_auth_id - v1"
Dec 11 09:17:56 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:56.160229234Z level=info msg="Migration successfully executed" id="create index IDX_user_auth_auth_module_auth_id - v1" duration=938.509µs
Dec 11 09:17:56 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:56.162376751Z level=info msg="Executing migration" id="alter user_auth.auth_id to length 190"
Dec 11 09:17:56 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:56.162465914Z level=info msg="Migration successfully executed" id="alter user_auth.auth_id to length 190" duration=91.073µs
Dec 11 09:17:56 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:56.165050394Z level=info msg="Executing migration" id="Add OAuth access token to user_auth"
Dec 11 09:17:56 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:56.169375168Z level=info msg="Migration successfully executed" id="Add OAuth access token to user_auth" duration=4.321624ms
Dec 11 09:17:56 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:56.171743791Z level=info msg="Executing migration" id="Add OAuth refresh token to user_auth"
Dec 11 09:17:56 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:56.176138318Z level=info msg="Migration successfully executed" id="Add OAuth refresh token to user_auth" duration=4.387627ms
Dec 11 09:17:56 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:56.181544396Z level=info msg="Executing migration" id="Add OAuth token type to user_auth"
Dec 11 09:17:56 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:56.186003394Z level=info msg="Migration successfully executed" id="Add OAuth token type to user_auth" duration=4.451588ms
Dec 11 09:17:56 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:56.18843917Z level=info msg="Executing migration" id="Add OAuth expiry to user_auth"
Dec 11 09:17:56 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:56.19261942Z level=info msg="Migration successfully executed" id="Add OAuth expiry to user_auth" duration=4.15758ms
Dec 11 09:17:56 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:56.195424897Z level=info msg="Executing migration" id="Add index to user_id column in user_auth"
Dec 11 09:17:56 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:56.196411138Z level=info msg="Migration successfully executed" id="Add index to user_id column in user_auth" duration=987.311µs
Dec 11 09:17:56 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:56.201070692Z level=info msg="Executing migration" id="Add OAuth ID token to user_auth"
Dec 11 09:17:56 compute-0 ceph-mon[74426]: mon.compute-0@0(leader).osd e96 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 11 09:17:56 compute-0 ceph-mon[74426]: mon.compute-0@0(leader).osd e96 do_prune osdmap full prune enabled
Dec 11 09:17:56 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:56.20811495Z level=info msg="Migration successfully executed" id="Add OAuth ID token to user_auth" duration=7.036798ms
Dec 11 09:17:56 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:56.212472176Z level=info msg="Executing migration" id="create server_lock table"
Dec 11 09:17:56 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:56.21358702Z level=info msg="Migration successfully executed" id="create server_lock table" duration=1.118604ms
Dec 11 09:17:56 compute-0 systemd[1]: Reloading.
Dec 11 09:17:56 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:56.217474931Z level=info msg="Executing migration" id="add index server_lock.operation_uid"
Dec 11 09:17:56 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:56.21870483Z level=info msg="Migration successfully executed" id="add index server_lock.operation_uid" duration=1.232879ms
Dec 11 09:17:56 compute-0 ceph-mon[74426]: mon.compute-0@0(leader).osd e97 e97: 3 total, 3 up, 3 in
Dec 11 09:17:56 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:56.22258506Z level=info msg="Executing migration" id="create user auth token table"
Dec 11 09:17:56 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:56.22387674Z level=info msg="Migration successfully executed" id="create user auth token table" duration=1.296421ms
Dec 11 09:17:56 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:56.227362139Z level=info msg="Executing migration" id="add unique index user_auth_token.auth_token"
Dec 11 09:17:56 compute-0 ceph-mon[74426]: log_channel(cluster) log [DBG] : osdmap e97: 3 total, 3 up, 3 in
Dec 11 09:17:56 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:56.228993539Z level=info msg="Migration successfully executed" id="add unique index user_auth_token.auth_token" duration=1.632251ms
Dec 11 09:17:56 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:56.232686694Z level=info msg="Executing migration" id="add unique index user_auth_token.prev_auth_token"
Dec 11 09:17:56 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 96 pg[10.d( v 56'1088 (0'0,56'1088] local-lis/les=78/79 n=8 ec=61/50 lis/c=78/78 les/c/f=79/79/0 sis=96 pruub=11.011159897s) [0] r=-1 lpr=96 pi=[78,96)/1 crt=56'1088 mlcod 0'0 active pruub 228.370361328s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 11 09:17:56 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 97 pg[10.d( v 56'1088 (0'0,56'1088] local-lis/les=78/79 n=8 ec=61/50 lis/c=78/78 les/c/f=79/79/0 sis=96 pruub=11.011104584s) [0] r=-1 lpr=96 pi=[78,96)/1 crt=56'1088 mlcod 0'0 unknown NOTIFY pruub 228.370361328s@ mbc={}] state<Start>: transitioning to Stray
Dec 11 09:17:56 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 96 pg[10.1d( v 56'1088 (0'0,56'1088] local-lis/les=78/79 n=5 ec=61/50 lis/c=78/78 les/c/f=79/79/0 sis=96 pruub=11.009754181s) [0] r=-1 lpr=96 pi=[78,96)/1 crt=56'1088 mlcod 0'0 active pruub 228.370483398s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 11 09:17:56 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 97 pg[10.1d( v 56'1088 (0'0,56'1088] local-lis/les=78/79 n=5 ec=61/50 lis/c=78/78 les/c/f=79/79/0 sis=96 pruub=11.009698868s) [0] r=-1 lpr=96 pi=[78,96)/1 crt=56'1088 mlcod 0'0 unknown NOTIFY pruub 228.370483398s@ mbc={}] state<Start>: transitioning to Stray
Dec 11 09:17:56 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:56.235744409Z level=info msg="Migration successfully executed" id="add unique index user_auth_token.prev_auth_token" duration=3.059455ms
Dec 11 09:17:56 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:56.238648048Z level=info msg="Executing migration" id="add index user_auth_token.user_id"
Dec 11 09:17:56 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:56.240110724Z level=info msg="Migration successfully executed" id="add index user_auth_token.user_id" duration=1.467336ms
Dec 11 09:17:56 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:56.243366545Z level=info msg="Executing migration" id="Add revoked_at to the user auth token"
Dec 11 09:17:56 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:56.24867769Z level=info msg="Migration successfully executed" id="Add revoked_at to the user auth token" duration=5.314345ms
Dec 11 09:17:56 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:56.251085365Z level=info msg="Executing migration" id="add index user_auth_token.revoked_at"
Dec 11 09:17:56 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:56.252129747Z level=info msg="Migration successfully executed" id="add index user_auth_token.revoked_at" duration=1.046002ms
Dec 11 09:17:56 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:56.254290104Z level=info msg="Executing migration" id="create cache_data table"
Dec 11 09:17:56 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:56.255289746Z level=info msg="Migration successfully executed" id="create cache_data table" duration=998.371µs
Dec 11 09:17:56 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:56.258076302Z level=info msg="Executing migration" id="add unique index cache_data.cache_key"
Dec 11 09:17:56 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:56.25901096Z level=info msg="Migration successfully executed" id="add unique index cache_data.cache_key" duration=935.188µs
Dec 11 09:17:56 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:56.261610131Z level=info msg="Executing migration" id="create short_url table v1"
Dec 11 09:17:56 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:56.263057687Z level=info msg="Migration successfully executed" id="create short_url table v1" duration=1.443405ms
Dec 11 09:17:56 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:56.267513044Z level=info msg="Executing migration" id="add index short_url.org_id-uid"
Dec 11 09:17:56 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:56.269084924Z level=info msg="Migration successfully executed" id="add index short_url.org_id-uid" duration=1.5741ms
Dec 11 09:17:56 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:56.272688475Z level=info msg="Executing migration" id="alter table short_url alter column created_by type to bigint"
Dec 11 09:17:56 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:56.272891191Z level=info msg="Migration successfully executed" id="alter table short_url alter column created_by type to bigint" duration=207.766µs
Dec 11 09:17:56 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:56.275613816Z level=info msg="Executing migration" id="delete alert_definition table"
Dec 11 09:17:56 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:56.275825842Z level=info msg="Migration successfully executed" id="delete alert_definition table" duration=213.926µs
Dec 11 09:17:56 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:56.278201246Z level=info msg="Executing migration" id="recreate alert_definition table"
Dec 11 09:17:56 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:56.279758675Z level=info msg="Migration successfully executed" id="recreate alert_definition table" duration=1.562419ms
Dec 11 09:17:56 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:56.282358336Z level=info msg="Executing migration" id="add index in alert_definition on org_id and title columns"
Dec 11 09:17:56 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:56.283947854Z level=info msg="Migration successfully executed" id="add index in alert_definition on org_id and title columns" duration=1.587889ms
Dec 11 09:17:56 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:56.286116563Z level=info msg="Executing migration" id="add index in alert_definition on org_id and uid columns"
Dec 11 09:17:56 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:56.287370341Z level=info msg="Migration successfully executed" id="add index in alert_definition on org_id and uid columns" duration=1.252658ms
Dec 11 09:17:56 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:56.289665922Z level=info msg="Executing migration" id="alter alert_definition table data column to mediumtext in mysql"
Dec 11 09:17:56 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:56.289780856Z level=info msg="Migration successfully executed" id="alter alert_definition table data column to mediumtext in mysql" duration=116.244µs
Dec 11 09:17:56 compute-0 systemd-rc-local-generator[97163]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 11 09:17:56 compute-0 systemd-sysv-generator[97166]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 11 09:17:56 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:56.292096237Z level=info msg="Executing migration" id="drop index in alert_definition on org_id and title columns"
Dec 11 09:17:56 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:56.293676377Z level=info msg="Migration successfully executed" id="drop index in alert_definition on org_id and title columns" duration=1.57305ms
Dec 11 09:17:56 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:56.296890897Z level=info msg="Executing migration" id="drop index in alert_definition on org_id and uid columns"
Dec 11 09:17:56 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:56.298144275Z level=info msg="Migration successfully executed" id="drop index in alert_definition on org_id and uid columns" duration=1.254348ms
Dec 11 09:17:56 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:56.302210022Z level=info msg="Executing migration" id="add unique index in alert_definition on org_id and title columns"
Dec 11 09:17:56 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:56.30345933Z level=info msg="Migration successfully executed" id="add unique index in alert_definition on org_id and title columns" duration=1.249868ms
Dec 11 09:17:56 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:56.305657879Z level=info msg="Executing migration" id="add unique index in alert_definition on org_id and uid columns"
Dec 11 09:17:56 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:56.306639899Z level=info msg="Migration successfully executed" id="add unique index in alert_definition on org_id and uid columns" duration=981.61µs
Dec 11 09:17:56 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:56.31021009Z level=info msg="Executing migration" id="Add column paused in alert_definition"
Dec 11 09:17:56 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:56.314658058Z level=info msg="Migration successfully executed" id="Add column paused in alert_definition" duration=4.447028ms
Dec 11 09:17:56 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:56.317065992Z level=info msg="Executing migration" id="drop alert_definition table"
Dec 11 09:17:56 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:56.318192568Z level=info msg="Migration successfully executed" id="drop alert_definition table" duration=1.126646ms
Dec 11 09:17:56 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:56.321835941Z level=info msg="Executing migration" id="delete alert_definition_version table"
Dec 11 09:17:56 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:56.322099199Z level=info msg="Migration successfully executed" id="delete alert_definition_version table" duration=265.148µs
Dec 11 09:17:56 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:56.324932557Z level=info msg="Executing migration" id="recreate alert_definition_version table"
Dec 11 09:17:56 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:56.326218127Z level=info msg="Migration successfully executed" id="recreate alert_definition_version table" duration=1.28604ms
Dec 11 09:17:56 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:56.329797568Z level=info msg="Executing migration" id="add index in alert_definition_version table on alert_definition_id and version columns"
Dec 11 09:17:56 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:56.330737318Z level=info msg="Migration successfully executed" id="add index in alert_definition_version table on alert_definition_id and version columns" duration=939.55µs
Dec 11 09:17:56 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:56.334399001Z level=info msg="Executing migration" id="add index in alert_definition_version table on alert_definition_uid and version columns"
Dec 11 09:17:56 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:56.335605669Z level=info msg="Migration successfully executed" id="add index in alert_definition_version table on alert_definition_uid and version columns" duration=1.208088ms
Dec 11 09:17:56 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:56.339627963Z level=info msg="Executing migration" id="alter alert_definition_version table data column to mediumtext in mysql"
Dec 11 09:17:56 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:56.339752957Z level=info msg="Migration successfully executed" id="alter alert_definition_version table data column to mediumtext in mysql" duration=126.784µs
Dec 11 09:17:56 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:56.341926135Z level=info msg="Executing migration" id="drop alert_definition_version table"
Dec 11 09:17:56 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:56.343243615Z level=info msg="Migration successfully executed" id="drop alert_definition_version table" duration=1.31698ms
Dec 11 09:17:56 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:56.347796006Z level=info msg="Executing migration" id="create alert_instance table"
Dec 11 09:17:56 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:56.349277173Z level=info msg="Migration successfully executed" id="create alert_instance table" duration=1.483666ms
Dec 11 09:17:56 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:56.355082213Z level=info msg="Executing migration" id="add index in alert_instance table on def_org_id, def_uid and current_state columns"
Dec 11 09:17:56 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:56.356229578Z level=info msg="Migration successfully executed" id="add index in alert_instance table on def_org_id, def_uid and current_state columns" duration=1.148915ms
Dec 11 09:17:56 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:56.36078286Z level=info msg="Executing migration" id="add index in alert_instance table on def_org_id, current_state columns"
Dec 11 09:17:56 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:56.361824792Z level=info msg="Migration successfully executed" id="add index in alert_instance table on def_org_id, current_state columns" duration=1.043752ms
Dec 11 09:17:56 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:56.364051472Z level=info msg="Executing migration" id="add column current_state_end to alert_instance"
Dec 11 09:17:56 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:56.368690665Z level=info msg="Migration successfully executed" id="add column current_state_end to alert_instance" duration=4.636613ms
Dec 11 09:17:56 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:56.370849792Z level=info msg="Executing migration" id="remove index def_org_id, def_uid, current_state on alert_instance"
Dec 11 09:17:56 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:56.371852534Z level=info msg="Migration successfully executed" id="remove index def_org_id, def_uid, current_state on alert_instance" duration=1.003352ms
Dec 11 09:17:56 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:56.374601019Z level=info msg="Executing migration" id="remove index def_org_id, current_state on alert_instance"
Dec 11 09:17:56 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:56.375619221Z level=info msg="Migration successfully executed" id="remove index def_org_id, current_state on alert_instance" duration=1.019772ms
Dec 11 09:17:56 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:56.37847285Z level=info msg="Executing migration" id="rename def_org_id to rule_org_id in alert_instance"
Dec 11 09:17:56 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:56.4045981Z level=info msg="Migration successfully executed" id="rename def_org_id to rule_org_id in alert_instance" duration=26.11973ms
Dec 11 09:17:56 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:56.416543501Z level=info msg="Executing migration" id="rename def_uid to rule_uid in alert_instance"
Dec 11 09:17:56 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:56.439180064Z level=info msg="Migration successfully executed" id="rename def_uid to rule_uid in alert_instance" duration=22.628072ms
Dec 11 09:17:56 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:56.442089244Z level=info msg="Executing migration" id="add index rule_org_id, rule_uid, current_state on alert_instance"
Dec 11 09:17:56 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:56.443492897Z level=info msg="Migration successfully executed" id="add index rule_org_id, rule_uid, current_state on alert_instance" duration=1.404263ms
Dec 11 09:17:56 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:56.445641345Z level=info msg="Executing migration" id="add index rule_org_id, current_state on alert_instance"
Dec 11 09:17:56 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:56.446862442Z level=info msg="Migration successfully executed" id="add index rule_org_id, current_state on alert_instance" duration=1.219427ms
Dec 11 09:17:56 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:56.449017239Z level=info msg="Executing migration" id="add current_reason column related to current_state"
Dec 11 09:17:56 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:56.454874161Z level=info msg="Migration successfully executed" id="add current_reason column related to current_state" duration=5.854931ms
Dec 11 09:17:56 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:56.458194954Z level=info msg="Executing migration" id="add result_fingerprint column to alert_instance"
Dec 11 09:17:56 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:56.463686614Z level=info msg="Migration successfully executed" id="add result_fingerprint column to alert_instance" duration=5.49084ms
Dec 11 09:17:56 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:56.465948314Z level=info msg="Executing migration" id="create alert_rule table"
Dec 11 09:17:56 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:56.467177753Z level=info msg="Migration successfully executed" id="create alert_rule table" duration=1.229729ms
Dec 11 09:17:56 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:56.469418712Z level=info msg="Executing migration" id="add index in alert_rule on org_id and title columns"
Dec 11 09:17:56 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:56.470572349Z level=info msg="Migration successfully executed" id="add index in alert_rule on org_id and title columns" duration=1.152306ms
Dec 11 09:17:56 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:56.47545033Z level=info msg="Executing migration" id="add index in alert_rule on org_id and uid columns"
Dec 11 09:17:56 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:56.480562558Z level=info msg="Migration successfully executed" id="add index in alert_rule on org_id and uid columns" duration=5.15727ms
Dec 11 09:17:56 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:56.484183201Z level=info msg="Executing migration" id="add index in alert_rule on org_id, namespace_uid, group_uid columns"
Dec 11 09:17:56 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:56.485527162Z level=info msg="Migration successfully executed" id="add index in alert_rule on org_id, namespace_uid, group_uid columns" duration=1.344391ms
Dec 11 09:17:56 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:56.489997931Z level=info msg="Executing migration" id="alter alert_rule table data column to mediumtext in mysql"
Dec 11 09:17:56 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:56.490176087Z level=info msg="Migration successfully executed" id="alter alert_rule table data column to mediumtext in mysql" duration=180.866µs
Dec 11 09:17:56 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:56.494925484Z level=info msg="Executing migration" id="add column for to alert_rule"
Dec 11 09:17:56 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:56.499905049Z level=info msg="Migration successfully executed" id="add column for to alert_rule" duration=4.976124ms
Dec 11 09:17:56 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:56.502447357Z level=info msg="Executing migration" id="add column annotations to alert_rule"
Dec 11 09:17:56 compute-0 systemd[1]: Starting Ceph haproxy.rgw.default.compute-0.paephv for 31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060...
Dec 11 09:17:56 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:56.506888666Z level=info msg="Migration successfully executed" id="add column annotations to alert_rule" duration=4.440909ms
Dec 11 09:17:56 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:56.509740504Z level=info msg="Executing migration" id="add column labels to alert_rule"
Dec 11 09:17:56 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:56.514080559Z level=info msg="Migration successfully executed" id="add column labels to alert_rule" duration=4.337135ms
Dec 11 09:17:56 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:56.517082012Z level=info msg="Executing migration" id="remove unique index from alert_rule on org_id, title columns"
Dec 11 09:17:56 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:56.51798248Z level=info msg="Migration successfully executed" id="remove unique index from alert_rule on org_id, title columns" duration=902.218µs
Dec 11 09:17:56 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:56.520513909Z level=info msg="Executing migration" id="add index in alert_rule on org_id, namespase_uid and title columns"
Dec 11 09:17:56 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:56.521587321Z level=info msg="Migration successfully executed" id="add index in alert_rule on org_id, namespase_uid and title columns" duration=1.073132ms
Dec 11 09:17:56 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:56.524397349Z level=info msg="Executing migration" id="add dashboard_uid column to alert_rule"
Dec 11 09:17:56 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:56.529716985Z level=info msg="Migration successfully executed" id="add dashboard_uid column to alert_rule" duration=5.309405ms
Dec 11 09:17:56 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:56.532386057Z level=info msg="Executing migration" id="add panel_id column to alert_rule"
Dec 11 09:17:56 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:56.53764004Z level=info msg="Migration successfully executed" id="add panel_id column to alert_rule" duration=5.249463ms
Dec 11 09:17:56 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:56.540411826Z level=info msg="Executing migration" id="add index in alert_rule on org_id, dashboard_uid and panel_id columns"
Dec 11 09:17:56 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:56.542125439Z level=info msg="Migration successfully executed" id="add index in alert_rule on org_id, dashboard_uid and panel_id columns" duration=1.713093ms
Dec 11 09:17:56 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:56.544115641Z level=info msg="Executing migration" id="add rule_group_idx column to alert_rule"
Dec 11 09:17:56 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:56.548925061Z level=info msg="Migration successfully executed" id="add rule_group_idx column to alert_rule" duration=4.8078ms
Dec 11 09:17:56 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:56.55084947Z level=info msg="Executing migration" id="add is_paused column to alert_rule table"
Dec 11 09:17:56 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:56.555398261Z level=info msg="Migration successfully executed" id="add is_paused column to alert_rule table" duration=4.547981ms
Dec 11 09:17:56 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:56.557428295Z level=info msg="Executing migration" id="fix is_paused column for alert_rule table"
Dec 11 09:17:56 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:56.557509547Z level=info msg="Migration successfully executed" id="fix is_paused column for alert_rule table" duration=82.852µs
Dec 11 09:17:56 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:56.559737086Z level=info msg="Executing migration" id="create alert_rule_version table"
Dec 11 09:17:56 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:56.560734447Z level=info msg="Migration successfully executed" id="create alert_rule_version table" duration=997.281µs
Dec 11 09:17:56 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:56.563002518Z level=info msg="Executing migration" id="add index in alert_rule_version table on rule_org_id, rule_uid and version columns"
Dec 11 09:17:56 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:56.564155573Z level=info msg="Migration successfully executed" id="add index in alert_rule_version table on rule_org_id, rule_uid and version columns" duration=1.151815ms
Dec 11 09:17:56 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:56.566267778Z level=info msg="Executing migration" id="add index in alert_rule_version table on rule_org_id, rule_namespace_uid and rule_group columns"
Dec 11 09:17:56 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:56.567389134Z level=info msg="Migration successfully executed" id="add index in alert_rule_version table on rule_org_id, rule_namespace_uid and rule_group columns" duration=1.121995ms
Dec 11 09:17:56 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:56.569475598Z level=info msg="Executing migration" id="alter alert_rule_version table data column to mediumtext in mysql"
Dec 11 09:17:56 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:56.56953576Z level=info msg="Migration successfully executed" id="alter alert_rule_version table data column to mediumtext in mysql" duration=62.802µs
Dec 11 09:17:56 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:56.571034357Z level=info msg="Executing migration" id="add column for to alert_rule_version"
Dec 11 09:17:56 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:56.576761645Z level=info msg="Migration successfully executed" id="add column for to alert_rule_version" duration=5.719658ms
Dec 11 09:17:56 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:56.578702835Z level=info msg="Executing migration" id="add column annotations to alert_rule_version"
Dec 11 09:17:56 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:56.583764292Z level=info msg="Migration successfully executed" id="add column annotations to alert_rule_version" duration=5.058587ms
Dec 11 09:17:56 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:56.58562266Z level=info msg="Executing migration" id="add column labels to alert_rule_version"
Dec 11 09:17:56 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:56.590124419Z level=info msg="Migration successfully executed" id="add column labels to alert_rule_version" duration=4.500159ms
Dec 11 09:17:56 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:56.59177675Z level=info msg="Executing migration" id="add rule_group_idx column to alert_rule_version"
Dec 11 09:17:56 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:56.596164077Z level=info msg="Migration successfully executed" id="add rule_group_idx column to alert_rule_version" duration=4.386157ms
Dec 11 09:17:56 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:56.59786885Z level=info msg="Executing migration" id="add is_paused column to alert_rule_versions table"
Dec 11 09:17:56 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:56.602293788Z level=info msg="Migration successfully executed" id="add is_paused column to alert_rule_versions table" duration=4.425188ms
Dec 11 09:17:56 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:56.603884197Z level=info msg="Executing migration" id="fix is_paused column for alert_rule_version table"
Dec 11 09:17:56 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:56.603934588Z level=info msg="Migration successfully executed" id="fix is_paused column for alert_rule_version table" duration=51.372µs
Dec 11 09:17:56 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:56.605330282Z level=info msg="Executing migration" id=create_alert_configuration_table
Dec 11 09:17:56 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:56.60594218Z level=info msg="Migration successfully executed" id=create_alert_configuration_table duration=627.099µs
Dec 11 09:17:56 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:56.607237951Z level=info msg="Executing migration" id="Add column default in alert_configuration"
Dec 11 09:17:56 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:56.611650477Z level=info msg="Migration successfully executed" id="Add column default in alert_configuration" duration=4.412066ms
Dec 11 09:17:56 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:56.612906547Z level=info msg="Executing migration" id="alert alert_configuration alertmanager_configuration column from TEXT to MEDIUMTEXT if mysql"
Dec 11 09:17:56 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:56.612956648Z level=info msg="Migration successfully executed" id="alert alert_configuration alertmanager_configuration column from TEXT to MEDIUMTEXT if mysql" duration=48.841µs
Dec 11 09:17:56 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:56.614433034Z level=info msg="Executing migration" id="add column org_id in alert_configuration"
Dec 11 09:17:56 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:56.618895323Z level=info msg="Migration successfully executed" id="add column org_id in alert_configuration" duration=4.461649ms
Dec 11 09:17:56 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:56.620488402Z level=info msg="Executing migration" id="add index in alert_configuration table on org_id column"
Dec 11 09:17:56 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:56.621250416Z level=info msg="Migration successfully executed" id="add index in alert_configuration table on org_id column" duration=761.744µs
Dec 11 09:17:56 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:56.623032681Z level=info msg="Executing migration" id="add configuration_hash column to alert_configuration"
Dec 11 09:17:56 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:56.627602393Z level=info msg="Migration successfully executed" id="add configuration_hash column to alert_configuration" duration=4.569512ms
Dec 11 09:17:56 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:56.629066848Z level=info msg="Executing migration" id=create_ngalert_configuration_table
Dec 11 09:17:56 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:56.629728529Z level=info msg="Migration successfully executed" id=create_ngalert_configuration_table duration=661.491µs
Dec 11 09:17:56 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:56.631800103Z level=info msg="Executing migration" id="add index in ngalert_configuration on org_id column"
Dec 11 09:17:56 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:56.632585048Z level=info msg="Migration successfully executed" id="add index in ngalert_configuration on org_id column" duration=784.785µs
Dec 11 09:17:56 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:56.634377553Z level=info msg="Executing migration" id="add column send_alerts_to in ngalert_configuration"
Dec 11 09:17:56 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:56.639350897Z level=info msg="Migration successfully executed" id="add column send_alerts_to in ngalert_configuration" duration=4.971804ms
Dec 11 09:17:56 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:56.641192895Z level=info msg="Executing migration" id="create provenance_type table"
Dec 11 09:17:56 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:56.641935767Z level=info msg="Migration successfully executed" id="create provenance_type table" duration=742.892µs
Dec 11 09:17:56 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:56.643474776Z level=info msg="Executing migration" id="add index to uniquify (record_key, record_type, org_id) columns"
Dec 11 09:17:56 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:56.644292181Z level=info msg="Migration successfully executed" id="add index to uniquify (record_key, record_type, org_id) columns" duration=816.855µs
Dec 11 09:17:56 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:56.646072506Z level=info msg="Executing migration" id="create alert_image table"
Dec 11 09:17:56 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:56.646853651Z level=info msg="Migration successfully executed" id="create alert_image table" duration=782.075µs
Dec 11 09:17:56 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:56.648607015Z level=info msg="Executing migration" id="add unique index on token to alert_image table"
Dec 11 09:17:56 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:56.649463532Z level=info msg="Migration successfully executed" id="add unique index on token to alert_image table" duration=854.887µs
Dec 11 09:17:56 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:56.651069461Z level=info msg="Executing migration" id="support longer URLs in alert_image table"
Dec 11 09:17:56 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:56.651137343Z level=info msg="Migration successfully executed" id="support longer URLs in alert_image table" duration=69.132µs
Dec 11 09:17:56 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:56.653061993Z level=info msg="Executing migration" id=create_alert_configuration_history_table
Dec 11 09:17:56 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:56.654001642Z level=info msg="Migration successfully executed" id=create_alert_configuration_history_table duration=944.37µs
Dec 11 09:17:56 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:56.655582172Z level=info msg="Executing migration" id="drop non-unique orgID index on alert_configuration"
Dec 11 09:17:56 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:56.65682458Z level=info msg="Migration successfully executed" id="drop non-unique orgID index on alert_configuration" duration=1.241948ms
Dec 11 09:17:56 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:56.66003765Z level=info msg="Executing migration" id="drop unique orgID index on alert_configuration if exists"
Dec 11 09:17:56 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:56.660505164Z level=warn msg="Skipping migration: Already executed, but not recorded in migration log" id="drop unique orgID index on alert_configuration if exists"
Dec 11 09:17:56 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:56.662991211Z level=info msg="Executing migration" id="extract alertmanager configuration history to separate table"
Dec 11 09:17:56 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:56.665284893Z level=info msg="Migration successfully executed" id="extract alertmanager configuration history to separate table" duration=2.290671ms
Dec 11 09:17:56 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:56.668595045Z level=info msg="Executing migration" id="add unique index on orgID to alert_configuration"
Dec 11 09:17:56 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:56.670147343Z level=info msg="Migration successfully executed" id="add unique index on orgID to alert_configuration" duration=1.552808ms
Dec 11 09:17:56 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:56.672182757Z level=info msg="Executing migration" id="add last_applied column to alert_configuration_history"
Dec 11 09:17:56 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:56.677002356Z level=info msg="Migration successfully executed" id="add last_applied column to alert_configuration_history" duration=4.816659ms
Dec 11 09:17:56 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:56.67904544Z level=info msg="Executing migration" id="create library_element table v1"
Dec 11 09:17:56 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:56.680206526Z level=info msg="Migration successfully executed" id="create library_element table v1" duration=1.160577ms
Dec 11 09:17:56 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:56.682268Z level=info msg="Executing migration" id="add index library_element org_id-folder_id-name-kind"
Dec 11 09:17:56 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:56.683253361Z level=info msg="Migration successfully executed" id="add index library_element org_id-folder_id-name-kind" duration=988.391µs
Dec 11 09:17:56 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:56.685235362Z level=info msg="Executing migration" id="create library_element_connection table v1"
Dec 11 09:17:56 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:56.686201902Z level=info msg="Migration successfully executed" id="create library_element_connection table v1" duration=965.61µs
Dec 11 09:17:56 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:56.688283217Z level=info msg="Executing migration" id="add index library_element_connection element_id-kind-connection_id"
Dec 11 09:17:56 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:56.689629098Z level=info msg="Migration successfully executed" id="add index library_element_connection element_id-kind-connection_id" duration=1.346511ms
Dec 11 09:17:56 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:56.691811866Z level=info msg="Executing migration" id="add unique index library_element org_id_uid"
Dec 11 09:17:56 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:56.692784316Z level=info msg="Migration successfully executed" id="add unique index library_element org_id_uid" duration=972.09µs
Dec 11 09:17:56 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:56.694461059Z level=info msg="Executing migration" id="increase max description length to 2048"
Dec 11 09:17:56 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:56.694482829Z level=info msg="Migration successfully executed" id="increase max description length to 2048" duration=23.39µs
Dec 11 09:17:56 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:56.696111779Z level=info msg="Executing migration" id="alter library_element model to mediumtext"
Dec 11 09:17:56 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:56.696169261Z level=info msg="Migration successfully executed" id="alter library_element model to mediumtext" duration=58.762µs
Dec 11 09:17:56 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:56.697819322Z level=info msg="Executing migration" id="clone move dashboard alerts to unified alerting"
Dec 11 09:17:56 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:56.698144032Z level=info msg="Migration successfully executed" id="clone move dashboard alerts to unified alerting" duration=322.58µs
Dec 11 09:17:56 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:56.699882647Z level=info msg="Executing migration" id="create data_keys table"
Dec 11 09:17:56 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:56.700879737Z level=info msg="Migration successfully executed" id="create data_keys table" duration=995.38µs
Dec 11 09:17:56 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:56.702707375Z level=info msg="Executing migration" id="create secrets table"
Dec 11 09:17:56 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:56.703493068Z level=info msg="Migration successfully executed" id="create secrets table" duration=785.214µs
Dec 11 09:17:56 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:56.705039927Z level=info msg="Executing migration" id="rename data_keys name column to id"
Dec 11 09:17:56 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-2-0-compute-0-iryjby[94782]: 11/12/2025 09:17:56 : epoch 693a8bfe : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fef70004440 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 11 09:17:56 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:56.741350804Z level=info msg="Migration successfully executed" id="rename data_keys name column to id" duration=36.295316ms
Dec 11 09:17:56 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:56.745143912Z level=info msg="Executing migration" id="add name column into data_keys"
Dec 11 09:17:56 compute-0 podman[97220]: 2025-12-11 09:17:56.751271171 +0000 UTC m=+0.051646373 container create 999fbd59fca26bc18b2c329934bebce6af952f00b0d1c8bdd935675cc0cd9431 (image=quay.io/ceph/haproxy:2.3, name=ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-haproxy-rgw-default-compute-0-paephv)
Dec 11 09:17:56 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:56.752841481Z level=info msg="Migration successfully executed" id="add name column into data_keys" duration=7.692188ms
Dec 11 09:17:56 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:56.755009188Z level=info msg="Executing migration" id="copy data_keys id column values into name"
Dec 11 09:17:56 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:56.755299537Z level=info msg="Migration successfully executed" id="copy data_keys id column values into name" duration=292.359µs
Dec 11 09:17:56 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:56.759011742Z level=info msg="Executing migration" id="rename data_keys name column to label"
Dec 11 09:17:56 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:56.788367113Z level=info msg="Migration successfully executed" id="rename data_keys name column to label" duration=29.352071ms
Dec 11 09:17:56 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:56.790662134Z level=info msg="Executing migration" id="rename data_keys id column back to name"
Dec 11 09:17:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/eb385e0afda0d384a7c45d1048c5d0bdd502c4bafc45e82108a13930d7001522/merged/var/lib/haproxy supports timestamps until 2038 (0x7fffffff)
Dec 11 09:17:56 compute-0 podman[97220]: 2025-12-11 09:17:56.813984989 +0000 UTC m=+0.114360221 container init 999fbd59fca26bc18b2c329934bebce6af952f00b0d1c8bdd935675cc0cd9431 (image=quay.io/ceph/haproxy:2.3, name=ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-haproxy-rgw-default-compute-0-paephv)
Dec 11 09:17:56 compute-0 podman[97220]: 2025-12-11 09:17:56.820587994 +0000 UTC m=+0.120963196 container start 999fbd59fca26bc18b2c329934bebce6af952f00b0d1c8bdd935675cc0cd9431 (image=quay.io/ceph/haproxy:2.3, name=ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-haproxy-rgw-default-compute-0-paephv)
Dec 11 09:17:56 compute-0 podman[97220]: 2025-12-11 09:17:56.728697421 +0000 UTC m=+0.029072643 image pull e85424b0d443f37ddd2dd8a3bb2ef6f18dd352b987723a921b64289023af2914 quay.io/ceph/haproxy:2.3
Dec 11 09:17:56 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:56.822428401Z level=info msg="Migration successfully executed" id="rename data_keys id column back to name" duration=31.762926ms
Dec 11 09:17:56 compute-0 bash[97220]: 999fbd59fca26bc18b2c329934bebce6af952f00b0d1c8bdd935675cc0cd9431
Dec 11 09:17:56 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:56.825432604Z level=info msg="Executing migration" id="create kv_store table v1"
Dec 11 09:17:56 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:56.826522317Z level=info msg="Migration successfully executed" id="create kv_store table v1" duration=1.090403ms
Dec 11 09:17:56 compute-0 ceph-osd[82859]: log_channel(cluster) log [DBG] : 12.19 scrub starts
Dec 11 09:17:56 compute-0 systemd[1]: Started Ceph haproxy.rgw.default.compute-0.paephv for 31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060.
Dec 11 09:17:56 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:56.907686152Z level=info msg="Executing migration" id="add index kv_store.org_id-namespace-key"
Dec 11 09:17:56 compute-0 ceph-mgr[74715]: [balancer INFO root] Optimize plan auto_2025-12-11_09:17:56
Dec 11 09:17:56 compute-0 ceph-mgr[74715]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec 11 09:17:56 compute-0 ceph-mgr[74715]: [balancer INFO root] Some PGs (0.005666) are unknown; try again later
Dec 11 09:17:56 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:56.910134418Z level=info msg="Migration successfully executed" id="add index kv_store.org_id-namespace-key" duration=2.451075ms
Dec 11 09:17:56 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:56.912214062Z level=info msg="Executing migration" id="update dashboard_uid and panel_id from existing annotations"
Dec 11 09:17:56 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:56.912541662Z level=info msg="Migration successfully executed" id="update dashboard_uid and panel_id from existing annotations" duration=328.57µs
Dec 11 09:17:56 compute-0 ceph-osd[82859]: log_channel(cluster) log [DBG] : 12.19 scrub ok
Dec 11 09:17:56 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:56.915416401Z level=info msg="Executing migration" id="create permission table"
Dec 11 09:17:56 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:56.916497994Z level=info msg="Migration successfully executed" id="create permission table" duration=1.082673ms
Dec 11 09:17:56 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-haproxy-rgw-default-compute-0-paephv[97235]: [NOTICE] 344/091756 (2) : New worker #1 (4) forked
Dec 11 09:17:56 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:56.920106936Z level=info msg="Executing migration" id="add unique index permission.role_id"
Dec 11 09:17:56 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:56.921420557Z level=info msg="Migration successfully executed" id="add unique index permission.role_id" duration=1.315071ms
Dec 11 09:17:56 compute-0 ceph-mgr[74715]: log_channel(cluster) log [DBG] : pgmap v117: 353 pgs: 2 unknown, 2 active+remapped, 349 active+clean; 456 KiB data, 126 MiB used, 60 GiB / 60 GiB avail; 63 B/s, 1 objects/s recovering
Dec 11 09:17:56 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:56.92376518Z level=info msg="Executing migration" id="add unique index role_id_action_scope"
Dec 11 09:17:56 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:56.92509189Z level=info msg="Migration successfully executed" id="add unique index role_id_action_scope" duration=1.32932ms
Dec 11 09:17:56 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:56.927460654Z level=info msg="Executing migration" id="create role table"
Dec 11 09:17:56 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:56.928580099Z level=info msg="Migration successfully executed" id="create role table" duration=1.119665ms
Dec 11 09:17:56 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:56.930747086Z level=info msg="Executing migration" id="add column display_name"
Dec 11 09:17:56 compute-0 radosgw[93354]: ====== starting new request req=0x7f36cd2845d0 =====
Dec 11 09:17:56 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:56.937672901Z level=info msg="Migration successfully executed" id="add column display_name" duration=6.896424ms
Dec 11 09:17:56 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:56.940702385Z level=info msg="Executing migration" id="add column group_name"
Dec 11 09:17:56 compute-0 radosgw[93354]: ====== req done req=0x7f36cd2845d0 op status=0 http_status=200 latency=0.006000188s ======
Dec 11 09:17:56 compute-0 radosgw[93354]: beast: 0x7f36cd2845d0: 192.168.122.100 - anonymous [11/Dec/2025:09:17:56.934 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.006000188s
Dec 11 09:17:56 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:56.94797253Z level=info msg="Migration successfully executed" id="add column group_name" duration=7.262844ms
Dec 11 09:17:56 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:56.950770107Z level=info msg="Executing migration" id="add index role.org_id"
Dec 11 09:17:56 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:56.951745347Z level=info msg="Migration successfully executed" id="add index role.org_id" duration=975.591µs
Dec 11 09:17:56 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:56.953555463Z level=info msg="Executing migration" id="add unique index role_org_id_name"
Dec 11 09:17:56 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:56.95442269Z level=info msg="Migration successfully executed" id="add unique index role_org_id_name" duration=867.097µs
Dec 11 09:17:56 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:56.956138703Z level=info msg="Executing migration" id="add index role_org_id_uid"
Dec 11 09:17:56 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:56.957027751Z level=info msg="Migration successfully executed" id="add index role_org_id_uid" duration=888.318µs
Dec 11 09:17:56 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:56.958652231Z level=info msg="Executing migration" id="create team role table"
Dec 11 09:17:56 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:56.959430315Z level=info msg="Migration successfully executed" id="create team role table" duration=777.684µs
Dec 11 09:17:56 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:56.96121668Z level=info msg="Executing migration" id="add index team_role.org_id"
Dec 11 09:17:56 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:56.962065977Z level=info msg="Migration successfully executed" id="add index team_role.org_id" duration=848.987µs
Dec 11 09:17:56 compute-0 sudo[96998]: pam_unix(sudo:session): session closed for user root
Dec 11 09:17:56 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:56.963966065Z level=info msg="Executing migration" id="add unique index team_role_org_id_team_id_role_id"
Dec 11 09:17:56 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:56.964902535Z level=info msg="Migration successfully executed" id="add unique index team_role_org_id_team_id_role_id" duration=936.02µs
Dec 11 09:17:56 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:56.966719871Z level=info msg="Executing migration" id="add index team_role.team_id"
Dec 11 09:17:56 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:56.967566967Z level=info msg="Migration successfully executed" id="add index team_role.team_id" duration=846.896µs
Dec 11 09:17:56 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:56.969412904Z level=info msg="Executing migration" id="create user role table"
Dec 11 09:17:56 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:56.97024517Z level=info msg="Migration successfully executed" id="create user role table" duration=832.086µs
Dec 11 09:17:56 compute-0 ceph-mon[74426]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec 11 09:17:56 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:56.97219374Z level=info msg="Executing migration" id="add index user_role.org_id"
Dec 11 09:17:56 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:56.97314249Z level=info msg="Migration successfully executed" id="add index user_role.org_id" duration=949.49µs
Dec 11 09:17:56 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:56.977390522Z level=info msg="Executing migration" id="add unique index user_role_org_id_user_id_role_id"
Dec 11 09:17:56 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:56.97830231Z level=info msg="Migration successfully executed" id="add unique index user_role_org_id_user_id_role_id" duration=911.828µs
Dec 11 09:17:56 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:56.98088096Z level=info msg="Executing migration" id="add index user_role.user_id"
Dec 11 09:17:56 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:56.981903162Z level=info msg="Migration successfully executed" id="add index user_role.user_id" duration=1.022623ms
Dec 11 09:17:56 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:56.984527953Z level=info msg="Executing migration" id="create builtin role table"
Dec 11 09:17:56 compute-0 ceph-mon[74426]: log_channel(audit) log [INF] : from='mgr.14400 192.168.122.100:0/1063598679' entity='mgr.compute-0.wwpcae' 
Dec 11 09:17:56 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:56.985742361Z level=info msg="Migration successfully executed" id="create builtin role table" duration=1.213748ms
Dec 11 09:17:56 compute-0 ceph-mon[74426]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec 11 09:17:56 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:56.988839177Z level=info msg="Executing migration" id="add index builtin_role.role_id"
Dec 11 09:17:56 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:56.990024023Z level=info msg="Migration successfully executed" id="add index builtin_role.role_id" duration=1.185406ms
Dec 11 09:17:56 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:56.992460709Z level=info msg="Executing migration" id="add index builtin_role.name"
Dec 11 09:17:56 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:56.993385688Z level=info msg="Migration successfully executed" id="add index builtin_role.name" duration=922.859µs
Dec 11 09:17:56 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:56.995583876Z level=info msg="Executing migration" id="Add column org_id to builtin_role table"
Dec 11 09:17:57 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:57.00213998Z level=info msg="Migration successfully executed" id="Add column org_id to builtin_role table" duration=6.548463ms
Dec 11 09:17:57 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:57.004390809Z level=info msg="Executing migration" id="add index builtin_role.org_id"
Dec 11 09:17:57 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:57.005421372Z level=info msg="Migration successfully executed" id="add index builtin_role.org_id" duration=1.031663ms
Dec 11 09:17:57 compute-0 ceph-mon[74426]: log_channel(audit) log [INF] : from='mgr.14400 192.168.122.100:0/1063598679' entity='mgr.compute-0.wwpcae' 
Dec 11 09:17:57 compute-0 ceph-mon[74426]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.ingress.rgw.default}] v 0)
Dec 11 09:17:57 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:57.008424985Z level=info msg="Executing migration" id="add unique index builtin_role_org_id_role_id_role"
Dec 11 09:17:57 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:57.00958361Z level=info msg="Migration successfully executed" id="add unique index builtin_role_org_id_role_id_role" duration=1.157675ms
Dec 11 09:17:57 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:57.014274156Z level=info msg="Executing migration" id="Remove unique index role_org_id_uid"
Dec 11 09:17:57 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:57.015653469Z level=info msg="Migration successfully executed" id="Remove unique index role_org_id_uid" duration=1.382333ms
Dec 11 09:17:57 compute-0 ceph-mon[74426]: log_channel(audit) log [INF] : from='mgr.14400 192.168.122.100:0/1063598679' entity='mgr.compute-0.wwpcae' 
Dec 11 09:17:57 compute-0 ceph-mgr[74715]: [cephadm INFO cephadm.serve] Deploying daemon haproxy.rgw.default.compute-2.amjwbo on compute-2
Dec 11 09:17:57 compute-0 ceph-mgr[74715]: log_channel(cephadm) log [INF] : Deploying daemon haproxy.rgw.default.compute-2.amjwbo on compute-2
Dec 11 09:17:57 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:57.026267568Z level=info msg="Executing migration" id="add unique index role.uid"
Dec 11 09:17:57 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:57.027578329Z level=info msg="Migration successfully executed" id="add unique index role.uid" duration=1.315171ms
Dec 11 09:17:57 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:57.030266753Z level=info msg="Executing migration" id="create seed assignment table"
Dec 11 09:17:57 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:57.030935763Z level=info msg="Migration successfully executed" id="create seed assignment table" duration=669.06µs
Dec 11 09:17:57 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:57.03275671Z level=info msg="Executing migration" id="add unique index builtin_role_role_name"
Dec 11 09:17:57 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:57.033703629Z level=info msg="Migration successfully executed" id="add unique index builtin_role_role_name" duration=944.309µs
Dec 11 09:17:57 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:57.038267111Z level=info msg="Executing migration" id="add column hidden to role table"
Dec 11 09:17:57 compute-0 ceph-mgr[74715]: [volumes INFO mgr_util] scanning for idle connections..
Dec 11 09:17:57 compute-0 ceph-mgr[74715]: [volumes INFO mgr_util] cleaning up connections: []
Dec 11 09:17:57 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:57.045367371Z level=info msg="Migration successfully executed" id="add column hidden to role table" duration=7.09579ms
Dec 11 09:17:57 compute-0 ceph-mgr[74715]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec 11 09:17:57 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-2-0-compute-0-iryjby[94782]: 11/12/2025 09:17:57 : epoch 693a8bfe : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fef54004050 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 11 09:17:57 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:57.049117488Z level=info msg="Executing migration" id="permission kind migration"
Dec 11 09:17:57 compute-0 ceph-mgr[74715]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 11 09:17:57 compute-0 ceph-mgr[74715]: [volumes INFO mgr_util] scanning for idle connections..
Dec 11 09:17:57 compute-0 ceph-mgr[74715]: [volumes INFO mgr_util] cleaning up connections: []
Dec 11 09:17:57 compute-0 ceph-mgr[74715]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 11 09:17:57 compute-0 ceph-mgr[74715]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 11 09:17:57 compute-0 ceph-mgr[74715]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 11 09:17:57 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:57.056224918Z level=info msg="Migration successfully executed" id="permission kind migration" duration=7.10389ms
Dec 11 09:17:57 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:57.058729217Z level=info msg="Executing migration" id="permission attribute migration"
Dec 11 09:17:57 compute-0 ceph-mgr[74715]: [volumes INFO mgr_util] scanning for idle connections..
Dec 11 09:17:57 compute-0 ceph-mgr[74715]: [volumes INFO mgr_util] cleaning up connections: []
Dec 11 09:17:57 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:57.06497873Z level=info msg="Migration successfully executed" id="permission attribute migration" duration=6.245783ms
Dec 11 09:17:57 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:57.067115347Z level=info msg="Executing migration" id="permission identifier migration"
Dec 11 09:17:57 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:57.073888737Z level=info msg="Migration successfully executed" id="permission identifier migration" duration=6.769649ms
Dec 11 09:17:57 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:57.075794255Z level=info msg="Executing migration" id="add permission identifier index"
Dec 11 09:17:57 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:57.076773296Z level=info msg="Migration successfully executed" id="add permission identifier index" duration=978.511µs
Dec 11 09:17:57 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:57.079510311Z level=info msg="Executing migration" id="add permission action scope role_id index"
Dec 11 09:17:57 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:57.080705659Z level=info msg="Migration successfully executed" id="add permission action scope role_id index" duration=1.195078ms
Dec 11 09:17:57 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:57.0846384Z level=info msg="Executing migration" id="remove permission role_id action scope index"
Dec 11 09:17:57 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:57.085858808Z level=info msg="Migration successfully executed" id="remove permission role_id action scope index" duration=1.224428ms
Dec 11 09:17:57 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:57.087711925Z level=info msg="Executing migration" id="create query_history table v1"
Dec 11 09:17:57 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:57.088545501Z level=info msg="Migration successfully executed" id="create query_history table v1" duration=833.626µs
Dec 11 09:17:57 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:57.091224043Z level=info msg="Executing migration" id="add index query_history.org_id-created_by-datasource_uid"
Dec 11 09:17:57 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:57.092310177Z level=info msg="Migration successfully executed" id="add index query_history.org_id-created_by-datasource_uid" duration=1.086604ms
Dec 11 09:17:57 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:57.094125413Z level=info msg="Executing migration" id="alter table query_history alter column created_by type to bigint"
Dec 11 09:17:57 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:57.094241916Z level=info msg="Migration successfully executed" id="alter table query_history alter column created_by type to bigint" duration=117.103µs
Dec 11 09:17:57 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:57.096246678Z level=info msg="Executing migration" id="rbac disabled migrator"
Dec 11 09:17:57 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:57.096342771Z level=info msg="Migration successfully executed" id="rbac disabled migrator" duration=97.943µs
Dec 11 09:17:57 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:57.098989562Z level=info msg="Executing migration" id="teams permissions migration"
Dec 11 09:17:57 compute-0 ceph-mgr[74715]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec 11 09:17:57 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:57.099682304Z level=info msg="Migration successfully executed" id="teams permissions migration" duration=689.512µs
Dec 11 09:17:57 compute-0 ceph-mgr[74715]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 11 09:17:57 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:57.102199892Z level=info msg="Executing migration" id="dashboard permissions"
Dec 11 09:17:57 compute-0 ceph-mgr[74715]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 11 09:17:57 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:57.103086889Z level=info msg="Migration successfully executed" id="dashboard permissions" duration=888.928µs
Dec 11 09:17:57 compute-0 ceph-mgr[74715]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 11 09:17:57 compute-0 ceph-mgr[74715]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 11 09:17:57 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:57.10572116Z level=info msg="Executing migration" id="dashboard permissions uid scopes"
Dec 11 09:17:57 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:57.106452943Z level=info msg="Migration successfully executed" id="dashboard permissions uid scopes" duration=732.433µs
Dec 11 09:17:57 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:57.109105875Z level=info msg="Executing migration" id="drop managed folder create actions"
Dec 11 09:17:57 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:57.109455705Z level=info msg="Migration successfully executed" id="drop managed folder create actions" duration=350.29µs
Dec 11 09:17:57 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:57.113098787Z level=info msg="Executing migration" id="alerting notification permissions"
Dec 11 09:17:57 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:57.113914753Z level=info msg="Migration successfully executed" id="alerting notification permissions" duration=820.596µs
Dec 11 09:17:57 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:57.116624106Z level=info msg="Executing migration" id="create query_history_star table v1"
Dec 11 09:17:57 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:57.117537055Z level=info msg="Migration successfully executed" id="create query_history_star table v1" duration=913.489µs
Dec 11 09:17:57 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:57.120432214Z level=info msg="Executing migration" id="add index query_history.user_id-query_uid"
Dec 11 09:17:57 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:57.12159876Z level=info msg="Migration successfully executed" id="add index query_history.user_id-query_uid" duration=1.163975ms
Dec 11 09:17:57 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:57.123734915Z level=info msg="Executing migration" id="add column org_id in query_history_star"
Dec 11 09:17:57 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:57.129925516Z level=info msg="Migration successfully executed" id="add column org_id in query_history_star" duration=6.191561ms
Dec 11 09:17:57 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:57.132141404Z level=info msg="Executing migration" id="alter table query_history_star_mig column user_id type to bigint"
Dec 11 09:17:57 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:57.132235927Z level=info msg="Migration successfully executed" id="alter table query_history_star_mig column user_id type to bigint" duration=96.263µs
Dec 11 09:17:57 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:57.134127246Z level=info msg="Executing migration" id="create correlation table v1"
Dec 11 09:17:57 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:57.13522805Z level=info msg="Migration successfully executed" id="create correlation table v1" duration=1.099974ms
Dec 11 09:17:57 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:57.13714879Z level=info msg="Executing migration" id="add index correlations.uid"
Dec 11 09:17:57 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:57.13814246Z level=info msg="Migration successfully executed" id="add index correlations.uid" duration=994.21µs
Dec 11 09:17:57 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:57.140188302Z level=info msg="Executing migration" id="add index correlations.source_uid"
Dec 11 09:17:57 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:57.141129312Z level=info msg="Migration successfully executed" id="add index correlations.source_uid" duration=940.97µs
Dec 11 09:17:57 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:57.143411472Z level=info msg="Executing migration" id="add correlation config column"
Dec 11 09:17:57 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:57.149730387Z level=info msg="Migration successfully executed" id="add correlation config column" duration=6.312065ms
Dec 11 09:17:57 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:57.151844132Z level=info msg="Executing migration" id="drop index IDX_correlation_uid - v1"
Dec 11 09:17:57 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:57.152817362Z level=info msg="Migration successfully executed" id="drop index IDX_correlation_uid - v1" duration=973.32µs
Dec 11 09:17:57 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:57.154643568Z level=info msg="Executing migration" id="drop index IDX_correlation_source_uid - v1"
Dec 11 09:17:57 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:57.155766863Z level=info msg="Migration successfully executed" id="drop index IDX_correlation_source_uid - v1" duration=1.123005ms
Dec 11 09:17:57 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:57.157555018Z level=info msg="Executing migration" id="Rename table correlation to correlation_tmp_qwerty - v1"
Dec 11 09:17:57 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:57.179700601Z level=info msg="Migration successfully executed" id="Rename table correlation to correlation_tmp_qwerty - v1" duration=22.138492ms
Dec 11 09:17:57 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:57.181776585Z level=info msg="Executing migration" id="create correlation v2"
Dec 11 09:17:57 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:57.183005243Z level=info msg="Migration successfully executed" id="create correlation v2" duration=1.227848ms
Dec 11 09:17:57 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:57.185015864Z level=info msg="Executing migration" id="create index IDX_correlation_uid - v2"
Dec 11 09:17:57 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:57.186082548Z level=info msg="Migration successfully executed" id="create index IDX_correlation_uid - v2" duration=1.066894ms
Dec 11 09:17:57 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:57.187648987Z level=info msg="Executing migration" id="create index IDX_correlation_source_uid - v2"
Dec 11 09:17:57 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:57.188809742Z level=info msg="Migration successfully executed" id="create index IDX_correlation_source_uid - v2" duration=1.159865ms
Dec 11 09:17:57 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:57.190428922Z level=info msg="Executing migration" id="create index IDX_correlation_org_id - v2"
Dec 11 09:17:57 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:57.191273978Z level=info msg="Migration successfully executed" id="create index IDX_correlation_org_id - v2" duration=844.215µs
Dec 11 09:17:57 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:57.194035913Z level=info msg="Executing migration" id="copy correlation v1 to v2"
Dec 11 09:17:57 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:57.19425525Z level=info msg="Migration successfully executed" id="copy correlation v1 to v2" duration=220.107µs
Dec 11 09:17:57 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:57.19685512Z level=info msg="Executing migration" id="drop correlation_tmp_qwerty"
Dec 11 09:17:57 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:57.197766058Z level=info msg="Migration successfully executed" id="drop correlation_tmp_qwerty" duration=910.718µs
Dec 11 09:17:57 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:57.200709909Z level=info msg="Executing migration" id="add provisioning column"
Dec 11 09:17:57 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:57.206952462Z level=info msg="Migration successfully executed" id="add provisioning column" duration=6.242323ms
Dec 11 09:17:57 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:57.210020346Z level=info msg="Executing migration" id="create entity_events table"
Dec 11 09:17:57 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:57.210903513Z level=info msg="Migration successfully executed" id="create entity_events table" duration=883.267µs
Dec 11 09:17:57 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:57.213118251Z level=info msg="Executing migration" id="create dashboard public config v1"
Dec 11 09:17:57 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:57.214128252Z level=info msg="Migration successfully executed" id="create dashboard public config v1" duration=1.010851ms
Dec 11 09:17:57 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:57.216556947Z level=info msg="Executing migration" id="drop index UQE_dashboard_public_config_uid - v1"
Dec 11 09:17:57 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:57.216911948Z level=warn msg="Skipping migration: Already executed, but not recorded in migration log" id="drop index UQE_dashboard_public_config_uid - v1"
Dec 11 09:17:57 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:57.219129146Z level=info msg="Executing migration" id="drop index IDX_dashboard_public_config_org_id_dashboard_uid - v1"
Dec 11 09:17:57 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:57.219500228Z level=warn msg="Skipping migration: Already executed, but not recorded in migration log" id="drop index IDX_dashboard_public_config_org_id_dashboard_uid - v1"
Dec 11 09:17:57 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:57.222837721Z level=info msg="Executing migration" id="Drop old dashboard public config table"
Dec 11 09:17:57 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:57.223811271Z level=info msg="Migration successfully executed" id="Drop old dashboard public config table" duration=974.78µs
Dec 11 09:17:57 compute-0 ceph-mon[74426]: mon.compute-0@0(leader).osd e97 do_prune osdmap full prune enabled
Dec 11 09:17:57 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:57.226378501Z level=info msg="Executing migration" id="recreate dashboard public config v1"
Dec 11 09:17:57 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:57.227413072Z level=info msg="Migration successfully executed" id="recreate dashboard public config v1" duration=1.031962ms
Dec 11 09:17:57 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:57.233727707Z level=info msg="Executing migration" id="create index UQE_dashboard_public_config_uid - v1"
Dec 11 09:17:57 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:57.234688567Z level=info msg="Migration successfully executed" id="create index UQE_dashboard_public_config_uid - v1" duration=961.36µs
Dec 11 09:17:57 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:57.237460002Z level=info msg="Executing migration" id="create index IDX_dashboard_public_config_org_id_dashboard_uid - v1"
Dec 11 09:17:57 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:57.238432681Z level=info msg="Migration successfully executed" id="create index IDX_dashboard_public_config_org_id_dashboard_uid - v1" duration=974.249µs
Dec 11 09:17:57 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:57.24163133Z level=info msg="Executing migration" id="drop index UQE_dashboard_public_config_uid - v2"
Dec 11 09:17:57 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:57.24259483Z level=info msg="Migration successfully executed" id="drop index UQE_dashboard_public_config_uid - v2" duration=964.74µs
Dec 11 09:17:57 compute-0 ceph-mon[74426]: mon.compute-0@0(leader).osd e98 e98: 3 total, 3 up, 3 in
Dec 11 09:17:57 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:57.246071458Z level=info msg="Executing migration" id="drop index IDX_dashboard_public_config_org_id_dashboard_uid - v2"
Dec 11 09:17:57 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:57.247009866Z level=info msg="Migration successfully executed" id="drop index IDX_dashboard_public_config_org_id_dashboard_uid - v2" duration=938.618µs
Dec 11 09:17:57 compute-0 ceph-mon[74426]: 12.1c scrub starts
Dec 11 09:17:57 compute-0 ceph-mon[74426]: 12.1c scrub ok
Dec 11 09:17:57 compute-0 ceph-mon[74426]: 9.3 scrub starts
Dec 11 09:17:57 compute-0 ceph-mon[74426]: 9.3 scrub ok
Dec 11 09:17:57 compute-0 ceph-mon[74426]: 11.7 scrub starts
Dec 11 09:17:57 compute-0 ceph-mon[74426]: 11.7 scrub ok
Dec 11 09:17:57 compute-0 ceph-mon[74426]: osdmap e97: 3 total, 3 up, 3 in
Dec 11 09:17:57 compute-0 ceph-mon[74426]: pgmap v117: 353 pgs: 2 unknown, 2 active+remapped, 349 active+clean; 456 KiB data, 126 MiB used, 60 GiB / 60 GiB avail; 63 B/s, 1 objects/s recovering
Dec 11 09:17:57 compute-0 ceph-mon[74426]: from='mgr.14400 192.168.122.100:0/1063598679' entity='mgr.compute-0.wwpcae' 
Dec 11 09:17:57 compute-0 ceph-mon[74426]: from='mgr.14400 192.168.122.100:0/1063598679' entity='mgr.compute-0.wwpcae' 
Dec 11 09:17:57 compute-0 ceph-mon[74426]: from='mgr.14400 192.168.122.100:0/1063598679' entity='mgr.compute-0.wwpcae' 
Dec 11 09:17:57 compute-0 ceph-mon[74426]: Deploying daemon haproxy.rgw.default.compute-2.amjwbo on compute-2
Dec 11 09:17:57 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:57.248770371Z level=info msg="Executing migration" id="Drop public config table"
Dec 11 09:17:57 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:57.249684638Z level=info msg="Migration successfully executed" id="Drop public config table" duration=914.237µs
Dec 11 09:17:57 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:57.252570247Z level=info msg="Executing migration" id="Recreate dashboard public config v2"
Dec 11 09:17:57 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:57.253721253Z level=info msg="Migration successfully executed" id="Recreate dashboard public config v2" duration=1.150786ms
Dec 11 09:17:57 compute-0 ceph-mon[74426]: log_channel(cluster) log [DBG] : osdmap e98: 3 total, 3 up, 3 in
Dec 11 09:17:57 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:57.258867462Z level=info msg="Executing migration" id="create index UQE_dashboard_public_config_uid - v2"
Dec 11 09:17:57 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:57.260286186Z level=info msg="Migration successfully executed" id="create index UQE_dashboard_public_config_uid - v2" duration=1.420314ms
Dec 11 09:17:57 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 98 pg[10.1d( v 56'1088 (0'0,56'1088] local-lis/les=78/79 n=5 ec=61/50 lis/c=78/78 les/c/f=79/79/0 sis=98) [0]/[1] r=0 lpr=98 pi=[78,98)/1 crt=56'1088 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Dec 11 09:17:57 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 98 pg[10.d( v 56'1088 (0'0,56'1088] local-lis/les=78/79 n=8 ec=61/50 lis/c=78/78 les/c/f=79/79/0 sis=98) [0]/[1] r=0 lpr=98 pi=[78,98)/1 crt=56'1088 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Dec 11 09:17:57 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 98 pg[10.1d( v 56'1088 (0'0,56'1088] local-lis/les=78/79 n=5 ec=61/50 lis/c=78/78 les/c/f=79/79/0 sis=98) [0]/[1] r=0 lpr=98 pi=[78,98)/1 crt=56'1088 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Dec 11 09:17:57 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 98 pg[10.d( v 56'1088 (0'0,56'1088] local-lis/les=78/79 n=8 ec=61/50 lis/c=78/78 les/c/f=79/79/0 sis=98) [0]/[1] r=0 lpr=98 pi=[78,98)/1 crt=56'1088 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Dec 11 09:17:57 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:57.263144274Z level=info msg="Executing migration" id="create index IDX_dashboard_public_config_org_id_dashboard_uid - v2"
Dec 11 09:17:57 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:57.265641841Z level=info msg="Migration successfully executed" id="create index IDX_dashboard_public_config_org_id_dashboard_uid - v2" duration=2.501287ms
Dec 11 09:17:57 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:57.269297453Z level=info msg="Executing migration" id="create index UQE_dashboard_public_config_access_token - v2"
Dec 11 09:17:57 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:57.27048998Z level=info msg="Migration successfully executed" id="create index UQE_dashboard_public_config_access_token - v2" duration=1.193687ms
Dec 11 09:17:57 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:57.272348477Z level=info msg="Executing migration" id="Rename table dashboard_public_config to dashboard_public - v2"
Dec 11 09:17:57 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:57.296361318Z level=info msg="Migration successfully executed" id="Rename table dashboard_public_config to dashboard_public - v2" duration=24.002871ms
Dec 11 09:17:57 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:57.298414221Z level=info msg="Executing migration" id="add annotations_enabled column"
Dec 11 09:17:57 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:57.305713606Z level=info msg="Migration successfully executed" id="add annotations_enabled column" duration=7.298565ms
Dec 11 09:17:57 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:57.307858893Z level=info msg="Executing migration" id="add time_selection_enabled column"
Dec 11 09:17:57 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:57.314711043Z level=info msg="Migration successfully executed" id="add time_selection_enabled column" duration=6.84731ms
Dec 11 09:17:57 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:57.317686275Z level=info msg="Executing migration" id="delete orphaned public dashboards"
Dec 11 09:17:57 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:57.318075747Z level=info msg="Migration successfully executed" id="delete orphaned public dashboards" duration=394.262µs
Dec 11 09:17:57 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:57.321156512Z level=info msg="Executing migration" id="add share column"
Dec 11 09:17:57 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:57.328596731Z level=info msg="Migration successfully executed" id="add share column" duration=7.436379ms
Dec 11 09:17:57 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:57.331609604Z level=info msg="Executing migration" id="backfill empty share column fields with default of public"
Dec 11 09:17:57 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:57.331842922Z level=info msg="Migration successfully executed" id="backfill empty share column fields with default of public" duration=232.828µs
Dec 11 09:17:57 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:57.333851873Z level=info msg="Executing migration" id="create file table"
Dec 11 09:17:57 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:57.334725911Z level=info msg="Migration successfully executed" id="create file table" duration=873.908µs
Dec 11 09:17:57 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:57.336574757Z level=info msg="Executing migration" id="file table idx: path natural pk"
Dec 11 09:17:57 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:57.337541618Z level=info msg="Migration successfully executed" id="file table idx: path natural pk" duration=967.68µs
Dec 11 09:17:57 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:57.339357983Z level=info msg="Executing migration" id="file table idx: parent_folder_path_hash fast folder retrieval"
Dec 11 09:17:57 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:57.340297052Z level=info msg="Migration successfully executed" id="file table idx: parent_folder_path_hash fast folder retrieval" duration=936.609µs
Dec 11 09:17:57 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:57.342104258Z level=info msg="Executing migration" id="create file_meta table"
Dec 11 09:17:57 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:57.342781569Z level=info msg="Migration successfully executed" id="create file_meta table" duration=677.261µs
Dec 11 09:17:57 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:57.345820093Z level=info msg="Executing migration" id="file table idx: path key"
Dec 11 09:17:57 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:57.347269537Z level=info msg="Migration successfully executed" id="file table idx: path key" duration=1.452354ms
Dec 11 09:17:57 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:57.350719163Z level=info msg="Executing migration" id="set path collation in file table"
Dec 11 09:17:57 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:57.350828877Z level=info msg="Migration successfully executed" id="set path collation in file table" duration=115.784µs
Dec 11 09:17:57 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:57.353125638Z level=info msg="Executing migration" id="migrate contents column to mediumblob for MySQL"
Dec 11 09:17:57 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:57.35318938Z level=info msg="Migration successfully executed" id="migrate contents column to mediumblob for MySQL" duration=64.842µs
Dec 11 09:17:57 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:57.355142911Z level=info msg="Executing migration" id="managed permissions migration"
Dec 11 09:17:57 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:57.355889003Z level=info msg="Migration successfully executed" id="managed permissions migration" duration=743.433µs
Dec 11 09:17:57 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:57.35804948Z level=info msg="Executing migration" id="managed folder permissions alert actions migration"
Dec 11 09:17:57 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:57.358260627Z level=info msg="Migration successfully executed" id="managed folder permissions alert actions migration" duration=211.377µs
Dec 11 09:17:57 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:57.360072672Z level=info msg="Executing migration" id="RBAC action name migrator"
Dec 11 09:17:57 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:57.361496206Z level=info msg="Migration successfully executed" id="RBAC action name migrator" duration=1.423013ms
Dec 11 09:17:57 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:57.364122247Z level=info msg="Executing migration" id="Add UID column to playlist"
Dec 11 09:17:57 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:57.372483664Z level=info msg="Migration successfully executed" id="Add UID column to playlist" duration=8.353678ms
Dec 11 09:17:57 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:57.374762595Z level=info msg="Executing migration" id="Update uid column values in playlist"
Dec 11 09:17:57 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:57.37490529Z level=info msg="Migration successfully executed" id="Update uid column values in playlist" duration=145.815µs
Dec 11 09:17:57 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:57.376685224Z level=info msg="Executing migration" id="Add index for uid in playlist"
Dec 11 09:17:57 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:57.377710506Z level=info msg="Migration successfully executed" id="Add index for uid in playlist" duration=1.024512ms
Dec 11 09:17:57 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-2-0-compute-0-iryjby[94782]: 11/12/2025 09:17:57 : epoch 693a8bfe : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fef60003490 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 11 09:17:57 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:57.37978405Z level=info msg="Executing migration" id="update group index for alert rules"
Dec 11 09:17:57 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:57.380156822Z level=info msg="Migration successfully executed" id="update group index for alert rules" duration=373.002µs
Dec 11 09:17:57 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:57.381949657Z level=info msg="Executing migration" id="managed folder permissions alert actions repeated migration"
Dec 11 09:17:57 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:57.382166563Z level=info msg="Migration successfully executed" id="managed folder permissions alert actions repeated migration" duration=216.747µs
Dec 11 09:17:57 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:57.383852625Z level=info msg="Executing migration" id="admin only folder/dashboard permission"
Dec 11 09:17:57 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:57.384396852Z level=info msg="Migration successfully executed" id="admin only folder/dashboard permission" duration=543.467µs
Dec 11 09:17:57 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:57.386570769Z level=info msg="Executing migration" id="add action column to seed_assignment"
Dec 11 09:17:57 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:57.393108101Z level=info msg="Migration successfully executed" id="add action column to seed_assignment" duration=6.537212ms
Dec 11 09:17:57 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:57.394983038Z level=info msg="Executing migration" id="add scope column to seed_assignment"
Dec 11 09:17:57 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:57.40120657Z level=info msg="Migration successfully executed" id="add scope column to seed_assignment" duration=6.222532ms
Dec 11 09:17:57 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:57.403104949Z level=info msg="Executing migration" id="remove unique index builtin_role_role_name before nullable update"
Dec 11 09:17:57 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:57.404057948Z level=info msg="Migration successfully executed" id="remove unique index builtin_role_role_name before nullable update" duration=953.389µs
Dec 11 09:17:57 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:57.406174953Z level=info msg="Executing migration" id="update seed_assignment role_name column to nullable"
Dec 11 09:17:57 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:57.484246011Z level=info msg="Migration successfully executed" id="update seed_assignment role_name column to nullable" duration=78.042497ms
Dec 11 09:17:57 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:57.486439288Z level=info msg="Executing migration" id="add unique index builtin_role_name back"
Dec 11 09:17:57 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:57.487430268Z level=info msg="Migration successfully executed" id="add unique index builtin_role_name back" duration=990.29µs
Dec 11 09:17:57 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:57.489157232Z level=info msg="Executing migration" id="add unique index builtin_role_action_scope"
Dec 11 09:17:57 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:57.490184163Z level=info msg="Migration successfully executed" id="add unique index builtin_role_action_scope" duration=1.026571ms
Dec 11 09:17:57 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:57.492386142Z level=info msg="Executing migration" id="add primary key to seed_assigment"
Dec 11 09:17:57 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:57.514042669Z level=info msg="Migration successfully executed" id="add primary key to seed_assigment" duration=21.652087ms
Dec 11 09:17:57 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:57.516268398Z level=info msg="Executing migration" id="add origin column to seed_assignment"
Dec 11 09:17:57 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:57.524239453Z level=info msg="Migration successfully executed" id="add origin column to seed_assignment" duration=7.958625ms
Dec 11 09:17:57 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:57.526801952Z level=info msg="Executing migration" id="add origin to plugin seed_assignment"
Dec 11 09:17:57 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:57.527088191Z level=info msg="Migration successfully executed" id="add origin to plugin seed_assignment" duration=287.109µs
Dec 11 09:17:57 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:57.530523697Z level=info msg="Executing migration" id="prevent seeding OnCall access"
Dec 11 09:17:57 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:57.530770465Z level=info msg="Migration successfully executed" id="prevent seeding OnCall access" duration=250.038µs
Dec 11 09:17:57 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:57.532926501Z level=info msg="Executing migration" id="managed folder permissions alert actions repeated fixed migration"
Dec 11 09:17:57 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:57.533190159Z level=info msg="Migration successfully executed" id="managed folder permissions alert actions repeated fixed migration" duration=266.598µs
Dec 11 09:17:57 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:57.535046736Z level=info msg="Executing migration" id="managed folder permissions library panel actions migration"
Dec 11 09:17:57 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:57.535212792Z level=info msg="Migration successfully executed" id="managed folder permissions library panel actions migration" duration=166.456µs
Dec 11 09:17:57 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:57.537186463Z level=info msg="Executing migration" id="migrate external alertmanagers to datsourcse"
Dec 11 09:17:57 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:57.537386829Z level=info msg="Migration successfully executed" id="migrate external alertmanagers to datsourcse" duration=201.276µs
Dec 11 09:17:57 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:57.539945148Z level=info msg="Executing migration" id="create folder table"
Dec 11 09:17:57 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:57.541033721Z level=info msg="Migration successfully executed" id="create folder table" duration=1.089813ms
Dec 11 09:17:57 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:57.543729984Z level=info msg="Executing migration" id="Add index for parent_uid"
Dec 11 09:17:57 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:57.545004674Z level=info msg="Migration successfully executed" id="Add index for parent_uid" duration=1.296931ms
Dec 11 09:17:57 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:57.547978036Z level=info msg="Executing migration" id="Add unique index for folder.uid and folder.org_id"
Dec 11 09:17:57 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:57.549265075Z level=info msg="Migration successfully executed" id="Add unique index for folder.uid and folder.org_id" duration=1.288849ms
Dec 11 09:17:57 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:57.55135744Z level=info msg="Executing migration" id="Update folder title length"
Dec 11 09:17:57 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:57.551419682Z level=info msg="Migration successfully executed" id="Update folder title length" duration=63.232µs
Dec 11 09:17:57 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:57.553010581Z level=info msg="Executing migration" id="Add unique index for folder.title and folder.parent_uid"
Dec 11 09:17:57 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:57.554163217Z level=info msg="Migration successfully executed" id="Add unique index for folder.title and folder.parent_uid" duration=1.152316ms
Dec 11 09:17:57 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:57.556380575Z level=info msg="Executing migration" id="Remove unique index for folder.title and folder.parent_uid"
Dec 11 09:17:57 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:57.558961444Z level=info msg="Migration successfully executed" id="Remove unique index for folder.title and folder.parent_uid" duration=2.580759ms
Dec 11 09:17:57 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:57.562141442Z level=info msg="Executing migration" id="Add unique index for title, parent_uid, and org_id"
Dec 11 09:17:57 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:57.563413351Z level=info msg="Migration successfully executed" id="Add unique index for title, parent_uid, and org_id" duration=1.271789ms
Dec 11 09:17:57 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:57.565490586Z level=info msg="Executing migration" id="Sync dashboard and folder table"
Dec 11 09:17:57 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:57.565976Z level=info msg="Migration successfully executed" id="Sync dashboard and folder table" duration=486.304µs
Dec 11 09:17:57 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:57.567799037Z level=info msg="Executing migration" id="Remove ghost folders from the folder table"
Dec 11 09:17:57 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:57.568073975Z level=info msg="Migration successfully executed" id="Remove ghost folders from the folder table" duration=275.328µs
Dec 11 09:17:57 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:57.57019227Z level=info msg="Executing migration" id="Remove unique index UQE_folder_uid_org_id"
Dec 11 09:17:57 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:57.57148242Z level=info msg="Migration successfully executed" id="Remove unique index UQE_folder_uid_org_id" duration=1.28833ms
Dec 11 09:17:57 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:57.573514703Z level=info msg="Executing migration" id="Add unique index UQE_folder_org_id_uid"
Dec 11 09:17:57 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:57.574580565Z level=info msg="Migration successfully executed" id="Add unique index UQE_folder_org_id_uid" duration=1.065312ms
Dec 11 09:17:57 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:57.576947198Z level=info msg="Executing migration" id="Remove unique index UQE_folder_title_parent_uid_org_id"
Dec 11 09:17:57 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:57.578131325Z level=info msg="Migration successfully executed" id="Remove unique index UQE_folder_title_parent_uid_org_id" duration=1.186437ms
Dec 11 09:17:57 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:57.580199079Z level=info msg="Executing migration" id="Add unique index UQE_folder_org_id_parent_uid_title"
Dec 11 09:17:57 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:57.581410926Z level=info msg="Migration successfully executed" id="Add unique index UQE_folder_org_id_parent_uid_title" duration=1.212348ms
Dec 11 09:17:57 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:57.584287185Z level=info msg="Executing migration" id="Remove index IDX_folder_parent_uid_org_id"
Dec 11 09:17:57 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:57.585400489Z level=info msg="Migration successfully executed" id="Remove index IDX_folder_parent_uid_org_id" duration=1.115254ms
Dec 11 09:17:57 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:57.588251977Z level=info msg="Executing migration" id="create anon_device table"
Dec 11 09:17:57 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:57.589171305Z level=info msg="Migration successfully executed" id="create anon_device table" duration=920.258µs
Dec 11 09:17:57 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:57.591244769Z level=info msg="Executing migration" id="add unique index anon_device.device_id"
Dec 11 09:17:57 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:57.592407105Z level=info msg="Migration successfully executed" id="add unique index anon_device.device_id" duration=1.161886ms
Dec 11 09:17:57 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:57.594632784Z level=info msg="Executing migration" id="add index anon_device.updated_at"
Dec 11 09:17:57 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:57.595765759Z level=info msg="Migration successfully executed" id="add index anon_device.updated_at" duration=1.133945ms
Dec 11 09:17:57 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:57.597953066Z level=info msg="Executing migration" id="create signing_key table"
Dec 11 09:17:57 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:57.598917256Z level=info msg="Migration successfully executed" id="create signing_key table" duration=964.35µs
Dec 11 09:17:57 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:57.601029501Z level=info msg="Executing migration" id="add unique index signing_key.key_id"
Dec 11 09:17:57 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:57.602080183Z level=info msg="Migration successfully executed" id="add unique index signing_key.key_id" duration=1.051792ms
Dec 11 09:17:57 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:57.604574041Z level=info msg="Executing migration" id="set legacy alert migration status in kvstore"
Dec 11 09:17:57 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:57.605832139Z level=info msg="Migration successfully executed" id="set legacy alert migration status in kvstore" duration=1.259118ms
Dec 11 09:17:57 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:57.608843872Z level=info msg="Executing migration" id="migrate record of created folders during legacy migration to kvstore"
Dec 11 09:17:57 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:57.609151422Z level=info msg="Migration successfully executed" id="migrate record of created folders during legacy migration to kvstore" duration=309.11µs
Dec 11 09:17:57 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:57.61136272Z level=info msg="Executing migration" id="Add folder_uid for dashboard"
Dec 11 09:17:57 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:57.619288344Z level=info msg="Migration successfully executed" id="Add folder_uid for dashboard" duration=7.921124ms
Dec 11 09:17:57 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:57.621631376Z level=info msg="Executing migration" id="Populate dashboard folder_uid column"
Dec 11 09:17:57 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:57.622352988Z level=info msg="Migration successfully executed" id="Populate dashboard folder_uid column" duration=722.632µs
Dec 11 09:17:57 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:57.624185715Z level=info msg="Executing migration" id="Add unique index for dashboard_org_id_folder_uid_title"
Dec 11 09:17:57 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:57.625283699Z level=info msg="Migration successfully executed" id="Add unique index for dashboard_org_id_folder_uid_title" duration=1.095754ms
Dec 11 09:17:57 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:57.628173018Z level=info msg="Executing migration" id="Delete unique index for dashboard_org_id_folder_id_title"
Dec 11 09:17:57 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:57.62921414Z level=info msg="Migration successfully executed" id="Delete unique index for dashboard_org_id_folder_id_title" duration=1.038882ms
Dec 11 09:17:57 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:57.631214212Z level=info msg="Executing migration" id="Delete unique index for dashboard_org_id_folder_uid_title"
Dec 11 09:17:57 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:57.632593555Z level=info msg="Migration successfully executed" id="Delete unique index for dashboard_org_id_folder_uid_title" duration=1.379343ms
Dec 11 09:17:57 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:57.635857014Z level=info msg="Executing migration" id="Add unique index for dashboard_org_id_folder_uid_title_is_folder"
Dec 11 09:17:57 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:57.637072852Z level=info msg="Migration successfully executed" id="Add unique index for dashboard_org_id_folder_uid_title_is_folder" duration=1.216388ms
Dec 11 09:17:57 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:57.639629621Z level=info msg="Executing migration" id="Restore index for dashboard_org_id_folder_id_title"
Dec 11 09:17:57 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:57.640826658Z level=info msg="Migration successfully executed" id="Restore index for dashboard_org_id_folder_id_title" duration=1.196617ms
Dec 11 09:17:57 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:57.642647934Z level=info msg="Executing migration" id="create sso_setting table"
Dec 11 09:17:57 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:57.643598684Z level=info msg="Migration successfully executed" id="create sso_setting table" duration=951.26µs
Dec 11 09:17:57 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:57.646227185Z level=info msg="Executing migration" id="copy kvstore migration status to each org"
Dec 11 09:17:57 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:57.646973647Z level=info msg="Migration successfully executed" id="copy kvstore migration status to each org" duration=749.312µs
Dec 11 09:17:57 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:57.64930021Z level=info msg="Executing migration" id="add back entry for orgid=0 migrated status"
Dec 11 09:17:57 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:57.649587919Z level=info msg="Migration successfully executed" id="add back entry for orgid=0 migrated status" duration=288.579µs
Dec 11 09:17:57 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:57.651722174Z level=info msg="Executing migration" id="alter kv_store.value to longtext"
Dec 11 09:17:57 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:57.651807397Z level=info msg="Migration successfully executed" id="alter kv_store.value to longtext" duration=86.723µs
Dec 11 09:17:57 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:57.653845529Z level=info msg="Executing migration" id="add notification_settings column to alert_rule table"
Dec 11 09:17:57 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:57.660710182Z level=info msg="Migration successfully executed" id="add notification_settings column to alert_rule table" duration=6.860592ms
Dec 11 09:17:57 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:57.662770175Z level=info msg="Executing migration" id="add notification_settings column to alert_rule_version table"
Dec 11 09:17:57 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:57.669473401Z level=info msg="Migration successfully executed" id="add notification_settings column to alert_rule_version table" duration=6.702686ms
Dec 11 09:17:57 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:57.671485694Z level=info msg="Executing migration" id="removing scope from alert.instances:read action migration"
Dec 11 09:17:57 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:57.671805274Z level=info msg="Migration successfully executed" id="removing scope from alert.instances:read action migration" duration=320.39µs
Dec 11 09:17:57 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=migrator t=2025-12-11T09:17:57.673396192Z level=info msg="migrations completed" performed=547 skipped=0 duration=2.597778397s
Dec 11 09:17:57 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=sqlstore t=2025-12-11T09:17:57.674564819Z level=info msg="Created default organization"
Dec 11 09:17:57 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=secrets t=2025-12-11T09:17:57.676985073Z level=info msg="Envelope encryption state" enabled=true currentprovider=secretKey.v1
Dec 11 09:17:57 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=plugin.store t=2025-12-11T09:17:57.697518726Z level=info msg="Loading plugins..."
Dec 11 09:17:57 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=local.finder t=2025-12-11T09:17:57.790818562Z level=warn msg="Skipping finding plugins as directory does not exist" path=/usr/share/grafana/plugins-bundled
Dec 11 09:17:57 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=plugin.store t=2025-12-11T09:17:57.790856524Z level=info msg="Plugins loaded" count=55 duration=93.338399ms
Dec 11 09:17:57 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=query_data t=2025-12-11T09:17:57.794540608Z level=info msg="Query Service initialization"
Dec 11 09:17:57 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=live.push_http t=2025-12-11T09:17:57.803671588Z level=info msg="Live Push Gateway initialization"
Dec 11 09:17:57 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=ngalert.migration t=2025-12-11T09:17:57.807245149Z level=info msg=Starting
Dec 11 09:17:57 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=ngalert.migration t=2025-12-11T09:17:57.80793794Z level=info msg="Applying transition" currentType=Legacy desiredType=UnifiedAlerting cleanOnDowngrade=false cleanOnUpgrade=false
Dec 11 09:17:57 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=ngalert.migration orgID=1 t=2025-12-11T09:17:57.808507468Z level=info msg="Migrating alerts for organisation"
Dec 11 09:17:57 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=ngalert.migration orgID=1 t=2025-12-11T09:17:57.809746416Z level=info msg="Alerts found to migrate" alerts=0
Dec 11 09:17:57 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=ngalert.migration t=2025-12-11T09:17:57.812092428Z level=info msg="Completed alerting migration"
Dec 11 09:17:57 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=ngalert.state.manager t=2025-12-11T09:17:57.83163309Z level=info msg="Running in alternative execution of Error/NoData mode"
Dec 11 09:17:57 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=infra.usagestats.collector t=2025-12-11T09:17:57.834099137Z level=info msg="registering usage stat providers" usageStatsProvidersLen=2
Dec 11 09:17:57 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=provisioning.datasources t=2025-12-11T09:17:57.835626114Z level=info msg="inserting datasource from configuration" name=Loki uid=P8E80F9AEF21F6940
Dec 11 09:17:57 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=provisioning.alerting t=2025-12-11T09:17:57.846478918Z level=info msg="starting to provision alerting"
Dec 11 09:17:57 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=provisioning.alerting t=2025-12-11T09:17:57.846509559Z level=info msg="finished to provision alerting"
Dec 11 09:17:57 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=grafanaStorageLogger t=2025-12-11T09:17:57.846800888Z level=info msg="Storage starting"
Dec 11 09:17:57 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=ngalert.state.manager t=2025-12-11T09:17:57.847742038Z level=info msg="Warming state cache for startup"
Dec 11 09:17:57 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=ngalert.multiorg.alertmanager t=2025-12-11T09:17:57.849152071Z level=info msg="Starting MultiOrg Alertmanager"
Dec 11 09:17:57 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=http.server t=2025-12-11T09:17:57.853812015Z level=info msg="HTTP Server TLS settings" MinTLSVersion=TLS1.2 configuredciphers=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA,TLS_ECDHE_ECDSA_WITH_AES_256_CBC_SHA,TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA,TLS_RSA_WITH_AES_128_GCM_SHA256,TLS_RSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_128_CBC_SHA,TLS_RSA_WITH_AES_256_CBC_SHA
Dec 11 09:17:57 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=http.server t=2025-12-11T09:17:57.854899648Z level=info msg="HTTP Server Listen" address=192.168.122.100:3000 protocol=https subUrl= socket=
Dec 11 09:17:57 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=ngalert.state.manager t=2025-12-11T09:17:57.891879528Z level=info msg="State cache has been initialized" states=0 duration=44.13547ms
Dec 11 09:17:57 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=ngalert.scheduler t=2025-12-11T09:17:57.89192391Z level=info msg="Starting scheduler" tickInterval=10s maxAttempts=1
Dec 11 09:17:57 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=ticker t=2025-12-11T09:17:57.892028243Z level=info msg=starting first_tick=2025-12-11T09:18:00Z
Dec 11 09:17:57 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=provisioning.dashboard t=2025-12-11T09:17:57.893839648Z level=info msg="starting to provision dashboards"
Dec 11 09:17:57 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=sqlstore.transactions t=2025-12-11T09:17:57.897612875Z level=info msg="Database locked, sleeping then retrying" error="database is locked" retry=0 code="database is locked"
Dec 11 09:17:57 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=sqlstore.transactions t=2025-12-11T09:17:57.908515081Z level=info msg="Database locked, sleeping then retrying" error="database is locked" retry=1 code="database is locked"
Dec 11 09:17:57 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=sqlstore.transactions t=2025-12-11T09:17:57.919548601Z level=info msg="Database locked, sleeping then retrying" error="database is locked" retry=2 code="database is locked"
Dec 11 09:17:57 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=sqlstore.transactions t=2025-12-11T09:17:57.930433207Z level=info msg="Database locked, sleeping then retrying" error="database is locked" retry=3 code="database is locked"
Dec 11 09:17:57 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=plugins.update.checker t=2025-12-11T09:17:57.935374679Z level=info msg="Update check succeeded" duration=88.512259ms
Dec 11 09:17:57 compute-0 ceph-osd[82859]: log_channel(cluster) log [DBG] : 10.12 scrub starts
Dec 11 09:17:57 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=grafana.update.checker t=2025-12-11T09:17:57.936584417Z level=info msg="Update check succeeded" duration=89.300444ms
Dec 11 09:17:57 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=sqlstore.transactions t=2025-12-11T09:17:57.941078825Z level=info msg="Database locked, sleeping then retrying" error="database is locked" retry=4 code="database is locked"
Dec 11 09:17:57 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=sqlstore.transactions t=2025-12-11T09:17:57.952296971Z level=info msg="Database locked, sleeping then retrying" error="database is locked" retry=0 code="database is locked"
Dec 11 09:17:57 compute-0 ceph-osd[82859]: log_channel(cluster) log [DBG] : 10.12 scrub ok
Dec 11 09:17:58 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=grafana-apiserver t=2025-12-11T09:17:58.014592212Z level=info msg="Adding GroupVersion playlist.grafana.app v0alpha1 to ResourceManager"
Dec 11 09:17:58 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=grafana-apiserver t=2025-12-11T09:17:58.018166852Z level=info msg="Adding GroupVersion featuretoggle.grafana.app v0alpha1 to ResourceManager"
Dec 11 09:17:58 compute-0 ceph-mon[74426]: mon.compute-0@0(leader).osd e98 do_prune osdmap full prune enabled
Dec 11 09:17:58 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=provisioning.dashboard t=2025-12-11T09:17:58.360570099Z level=info msg="finished to provision dashboards"
Dec 11 09:17:58 compute-0 ceph-mon[74426]: 12.19 scrub starts
Dec 11 09:17:58 compute-0 ceph-mon[74426]: 12.19 scrub ok
Dec 11 09:17:58 compute-0 ceph-mon[74426]: 8.2 scrub starts
Dec 11 09:17:58 compute-0 ceph-mon[74426]: 8.2 scrub ok
Dec 11 09:17:58 compute-0 ceph-mon[74426]: 8.1b scrub starts
Dec 11 09:17:58 compute-0 ceph-mon[74426]: 8.1b scrub ok
Dec 11 09:17:58 compute-0 ceph-mon[74426]: osdmap e98: 3 total, 3 up, 3 in
Dec 11 09:17:58 compute-0 ceph-mon[74426]: mon.compute-0@0(leader).osd e99 e99: 3 total, 3 up, 3 in
Dec 11 09:17:58 compute-0 ceph-mon[74426]: log_channel(cluster) log [DBG] : osdmap e99: 3 total, 3 up, 3 in
Dec 11 09:17:58 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 99 pg[10.1d( v 56'1088 (0'0,56'1088] local-lis/les=98/99 n=5 ec=61/50 lis/c=78/78 les/c/f=79/79/0 sis=98) [0]/[1] async=[0] r=0 lpr=98 pi=[78,98)/1 crt=56'1088 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 11 09:17:58 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 99 pg[10.d( v 56'1088 (0'0,56'1088] local-lis/les=98/99 n=8 ec=61/50 lis/c=78/78 les/c/f=79/79/0 sis=98) [0]/[1] async=[0] r=0 lpr=98 pi=[78,98)/1 crt=56'1088 mlcod 0'0 active+remapped mbc={255={(0+1)=8}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 11 09:17:58 compute-0 ceph-mgr[74715]: [progress INFO root] Writing back 29 completed events
Dec 11 09:17:58 compute-0 ceph-mon[74426]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0)
Dec 11 09:17:58 compute-0 ceph-mon[74426]: log_channel(audit) log [INF] : from='mgr.14400 192.168.122.100:0/1063598679' entity='mgr.compute-0.wwpcae' 
Dec 11 09:17:58 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-2-0-compute-0-iryjby[94782]: 11/12/2025 09:17:58 : epoch 693a8bfe : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fef500016a0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 11 09:17:58 compute-0 radosgw[93354]: ====== starting new request req=0x7f36cd2845d0 =====
Dec 11 09:17:58 compute-0 radosgw[93354]: ====== req done req=0x7f36cd2845d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 11 09:17:58 compute-0 radosgw[93354]: beast: 0x7f36cd2845d0: 192.168.122.102 - anonymous [11/Dec/2025:09:17:58.823 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 11 09:17:58 compute-0 ceph-mon[74426]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0)
Dec 11 09:17:58 compute-0 ceph-mon[74426]: log_channel(audit) log [INF] : from='mgr.14400 192.168.122.100:0/1063598679' entity='mgr.compute-0.wwpcae' 
Dec 11 09:17:58 compute-0 ceph-mon[74426]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Dec 11 09:17:58 compute-0 ceph-mon[74426]: log_channel(audit) log [INF] : from='mgr.14400 192.168.122.100:0/1063598679' entity='mgr.compute-0.wwpcae' 
Dec 11 09:17:58 compute-0 ceph-mon[74426]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.ingress.rgw.default}] v 0)
Dec 11 09:17:58 compute-0 ceph-osd[82859]: log_channel(cluster) log [DBG] : 10.2 scrub starts
Dec 11 09:17:58 compute-0 ceph-mgr[74715]: log_channel(cluster) log [DBG] : pgmap v120: 353 pgs: 1 active+clean+scrubbing, 2 remapped+peering, 350 active+clean; 456 KiB data, 143 MiB used, 60 GiB / 60 GiB avail; 54 B/s, 5 objects/s recovering
Dec 11 09:17:58 compute-0 ceph-mon[74426]: log_channel(audit) log [INF] : from='mgr.14400 192.168.122.100:0/1063598679' entity='mgr.compute-0.wwpcae' 
Dec 11 09:17:58 compute-0 ceph-mon[74426]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/ingress.rgw.default/keepalived_password}] v 0)
Dec 11 09:17:58 compute-0 ceph-mon[74426]: log_channel(audit) log [INF] : from='mgr.14400 192.168.122.100:0/1063598679' entity='mgr.compute-0.wwpcae' 
Dec 11 09:17:58 compute-0 radosgw[93354]: ====== starting new request req=0x7f36cd2845d0 =====
Dec 11 09:17:58 compute-0 radosgw[93354]: ====== req done req=0x7f36cd2845d0 op status=0 http_status=200 latency=0.001000031s ======
Dec 11 09:17:58 compute-0 radosgw[93354]: beast: 0x7f36cd2845d0: 192.168.122.100 - anonymous [11/Dec/2025:09:17:58.941 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Dec 11 09:17:58 compute-0 ceph-mgr[74715]: [cephadm INFO cephadm.services.ingress] 192.168.122.2 is in 192.168.122.0/24 on compute-2 interface br-ex
Dec 11 09:17:58 compute-0 ceph-mgr[74715]: log_channel(cephadm) log [INF] : 192.168.122.2 is in 192.168.122.0/24 on compute-2 interface br-ex
Dec 11 09:17:58 compute-0 ceph-mgr[74715]: [cephadm INFO cephadm.services.ingress] 192.168.122.2 is in 192.168.122.0/24 on compute-0 interface br-ex
Dec 11 09:17:58 compute-0 ceph-mgr[74715]: log_channel(cephadm) log [INF] : 192.168.122.2 is in 192.168.122.0/24 on compute-0 interface br-ex
Dec 11 09:17:58 compute-0 ceph-mgr[74715]: [cephadm INFO cephadm.serve] Deploying daemon keepalived.rgw.default.compute-2.ippkne on compute-2
Dec 11 09:17:58 compute-0 ceph-mgr[74715]: log_channel(cephadm) log [INF] : Deploying daemon keepalived.rgw.default.compute-2.ippkne on compute-2
Dec 11 09:17:58 compute-0 ceph-osd[82859]: log_channel(cluster) log [DBG] : 10.2 scrub ok
Dec 11 09:17:59 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-2-0-compute-0-iryjby[94782]: 11/12/2025 09:17:59 : epoch 693a8bfe : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fef500016a0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 11 09:17:59 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-2-0-compute-0-iryjby[94782]: 11/12/2025 09:17:59 : epoch 693a8bfe : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fef54004050 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 11 09:17:59 compute-0 ceph-mon[74426]: mon.compute-0@0(leader).osd e99 do_prune osdmap full prune enabled
Dec 11 09:17:59 compute-0 ceph-mon[74426]: 10.12 scrub starts
Dec 11 09:17:59 compute-0 ceph-mon[74426]: 10.12 scrub ok
Dec 11 09:17:59 compute-0 ceph-mon[74426]: 11.17 scrub starts
Dec 11 09:17:59 compute-0 ceph-mon[74426]: 11.17 scrub ok
Dec 11 09:17:59 compute-0 ceph-mon[74426]: 9.6 scrub starts
Dec 11 09:17:59 compute-0 ceph-mon[74426]: 9.6 scrub ok
Dec 11 09:17:59 compute-0 ceph-mon[74426]: osdmap e99: 3 total, 3 up, 3 in
Dec 11 09:17:59 compute-0 ceph-mon[74426]: from='mgr.14400 192.168.122.100:0/1063598679' entity='mgr.compute-0.wwpcae' 
Dec 11 09:17:59 compute-0 ceph-mon[74426]: from='mgr.14400 192.168.122.100:0/1063598679' entity='mgr.compute-0.wwpcae' 
Dec 11 09:17:59 compute-0 ceph-mon[74426]: from='mgr.14400 192.168.122.100:0/1063598679' entity='mgr.compute-0.wwpcae' 
Dec 11 09:17:59 compute-0 ceph-mon[74426]: 10.2 scrub starts
Dec 11 09:17:59 compute-0 ceph-mon[74426]: pgmap v120: 353 pgs: 1 active+clean+scrubbing, 2 remapped+peering, 350 active+clean; 456 KiB data, 143 MiB used, 60 GiB / 60 GiB avail; 54 B/s, 5 objects/s recovering
Dec 11 09:17:59 compute-0 ceph-mon[74426]: from='mgr.14400 192.168.122.100:0/1063598679' entity='mgr.compute-0.wwpcae' 
Dec 11 09:17:59 compute-0 ceph-mon[74426]: from='mgr.14400 192.168.122.100:0/1063598679' entity='mgr.compute-0.wwpcae' 
Dec 11 09:17:59 compute-0 ceph-mon[74426]: 192.168.122.2 is in 192.168.122.0/24 on compute-2 interface br-ex
Dec 11 09:17:59 compute-0 ceph-mon[74426]: 192.168.122.2 is in 192.168.122.0/24 on compute-0 interface br-ex
Dec 11 09:17:59 compute-0 ceph-mon[74426]: Deploying daemon keepalived.rgw.default.compute-2.ippkne on compute-2
Dec 11 09:17:59 compute-0 ceph-mon[74426]: 10.2 scrub ok
Dec 11 09:17:59 compute-0 ceph-mon[74426]: 8.c scrub starts
Dec 11 09:17:59 compute-0 ceph-mon[74426]: 8.c scrub ok
Dec 11 09:17:59 compute-0 ceph-mon[74426]: 8.10 scrub starts
Dec 11 09:17:59 compute-0 ceph-mon[74426]: 8.10 scrub ok
Dec 11 09:17:59 compute-0 ceph-mon[74426]: mon.compute-0@0(leader).osd e100 e100: 3 total, 3 up, 3 in
Dec 11 09:17:59 compute-0 ceph-mon[74426]: log_channel(cluster) log [DBG] : osdmap e100: 3 total, 3 up, 3 in
Dec 11 09:17:59 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 100 pg[10.1d( v 56'1088 (0'0,56'1088] local-lis/les=98/99 n=5 ec=61/50 lis/c=98/78 les/c/f=99/79/0 sis=100 pruub=14.887444496s) [0] async=[0] r=-1 lpr=100 pi=[78,100)/1 crt=56'1088 mlcod 56'1088 active pruub 235.569686890s@ mbc={255={}}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 11 09:17:59 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 100 pg[10.1d( v 56'1088 (0'0,56'1088] local-lis/les=98/99 n=5 ec=61/50 lis/c=98/78 les/c/f=99/79/0 sis=100 pruub=14.887337685s) [0] r=-1 lpr=100 pi=[78,100)/1 crt=56'1088 mlcod 0'0 unknown NOTIFY pruub 235.569686890s@ mbc={}] state<Start>: transitioning to Stray
Dec 11 09:17:59 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 100 pg[10.d( v 56'1088 (0'0,56'1088] local-lis/les=98/99 n=8 ec=61/50 lis/c=98/78 les/c/f=99/79/0 sis=100 pruub=14.890527725s) [0] async=[0] r=-1 lpr=100 pi=[78,100)/1 crt=56'1088 mlcod 56'1088 active pruub 235.572921753s@ mbc={255={}}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 11 09:17:59 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 100 pg[10.d( v 56'1088 (0'0,56'1088] local-lis/les=98/99 n=8 ec=61/50 lis/c=98/78 les/c/f=99/79/0 sis=100 pruub=14.890480995s) [0] r=-1 lpr=100 pi=[78,100)/1 crt=56'1088 mlcod 0'0 unknown NOTIFY pruub 235.572921753s@ mbc={}] state<Start>: transitioning to Stray
Dec 11 09:17:59 compute-0 ceph-osd[82859]: log_channel(cluster) log [DBG] : 10.5 deep-scrub starts
Dec 11 09:17:59 compute-0 ceph-osd[82859]: log_channel(cluster) log [DBG] : 10.5 deep-scrub ok
Dec 11 09:18:00 compute-0 ceph-mon[74426]: mon.compute-0@0(leader).osd e100 do_prune osdmap full prune enabled
Dec 11 09:18:00 compute-0 ceph-mon[74426]: mon.compute-0@0(leader).osd e101 e101: 3 total, 3 up, 3 in
Dec 11 09:18:00 compute-0 ceph-mon[74426]: log_channel(cluster) log [DBG] : osdmap e101: 3 total, 3 up, 3 in
Dec 11 09:18:00 compute-0 ceph-mon[74426]: osdmap e100: 3 total, 3 up, 3 in
Dec 11 09:18:00 compute-0 ceph-mon[74426]: 10.5 deep-scrub starts
Dec 11 09:18:00 compute-0 ceph-mon[74426]: 10.5 deep-scrub ok
Dec 11 09:18:00 compute-0 ceph-mon[74426]: 9.8 scrub starts
Dec 11 09:18:00 compute-0 ceph-mon[74426]: 9.8 scrub ok
Dec 11 09:18:00 compute-0 ceph-mon[74426]: 11.14 scrub starts
Dec 11 09:18:00 compute-0 ceph-mon[74426]: 11.14 scrub ok
Dec 11 09:18:00 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-2-0-compute-0-iryjby[94782]: 11/12/2025 09:18:00 : epoch 693a8bfe : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fef60003490 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 11 09:18:00 compute-0 ceph-mon[74426]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0)
Dec 11 09:18:00 compute-0 ceph-mon[74426]: log_channel(audit) log [INF] : from='mgr.14400 192.168.122.100:0/1063598679' entity='mgr.compute-0.wwpcae' 
Dec 11 09:18:00 compute-0 ceph-mon[74426]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Dec 11 09:18:00 compute-0 ceph-mon[74426]: log_channel(audit) log [INF] : from='mgr.14400 192.168.122.100:0/1063598679' entity='mgr.compute-0.wwpcae' 
Dec 11 09:18:00 compute-0 ceph-mon[74426]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.ingress.rgw.default}] v 0)
Dec 11 09:18:00 compute-0 ceph-mon[74426]: log_channel(audit) log [INF] : from='mgr.14400 192.168.122.100:0/1063598679' entity='mgr.compute-0.wwpcae' 
Dec 11 09:18:00 compute-0 ceph-mgr[74715]: [cephadm INFO cephadm.services.ingress] 192.168.122.2 is in 192.168.122.0/24 on compute-0 interface br-ex
Dec 11 09:18:00 compute-0 ceph-mgr[74715]: log_channel(cephadm) log [INF] : 192.168.122.2 is in 192.168.122.0/24 on compute-0 interface br-ex
Dec 11 09:18:00 compute-0 ceph-mgr[74715]: [cephadm INFO cephadm.services.ingress] 192.168.122.2 is in 192.168.122.0/24 on compute-2 interface br-ex
Dec 11 09:18:00 compute-0 ceph-mgr[74715]: log_channel(cephadm) log [INF] : 192.168.122.2 is in 192.168.122.0/24 on compute-2 interface br-ex
Dec 11 09:18:00 compute-0 ceph-mgr[74715]: [cephadm INFO cephadm.serve] Deploying daemon keepalived.rgw.default.compute-0.gxvbmc on compute-0
Dec 11 09:18:00 compute-0 ceph-mgr[74715]: log_channel(cephadm) log [INF] : Deploying daemon keepalived.rgw.default.compute-0.gxvbmc on compute-0
Dec 11 09:18:00 compute-0 radosgw[93354]: ====== starting new request req=0x7f36cd2845d0 =====
Dec 11 09:18:00 compute-0 radosgw[93354]: ====== req done req=0x7f36cd2845d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 11 09:18:00 compute-0 radosgw[93354]: beast: 0x7f36cd2845d0: 192.168.122.102 - anonymous [11/Dec/2025:09:18:00.826 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 11 09:18:00 compute-0 sudo[97256]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 11 09:18:00 compute-0 sudo[97256]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 11 09:18:00 compute-0 ceph-osd[82859]: log_channel(cluster) log [DBG] : 10.18 scrub starts
Dec 11 09:18:00 compute-0 sudo[97256]: pam_unix(sudo:session): session closed for user root
Dec 11 09:18:00 compute-0 ceph-mgr[74715]: log_channel(cluster) log [DBG] : pgmap v123: 353 pgs: 1 active+clean+scrubbing, 2 remapped+peering, 350 active+clean; 456 KiB data, 143 MiB used, 60 GiB / 60 GiB avail; 54 B/s, 5 objects/s recovering
Dec 11 09:18:00 compute-0 radosgw[93354]: ====== starting new request req=0x7f36cd2845d0 =====
Dec 11 09:18:00 compute-0 radosgw[93354]: ====== req done req=0x7f36cd2845d0 op status=0 http_status=200 latency=0.001000031s ======
Dec 11 09:18:00 compute-0 radosgw[93354]: beast: 0x7f36cd2845d0: 192.168.122.100 - anonymous [11/Dec/2025:09:18:00.943 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Dec 11 09:18:00 compute-0 sudo[97281]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/keepalived:2.2.4 --timeout 895 _orch deploy --fsid 31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060
Dec 11 09:18:00 compute-0 sudo[97281]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 11 09:18:01 compute-0 ceph-osd[82859]: log_channel(cluster) log [DBG] : 10.18 scrub ok
Dec 11 09:18:01 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-2-0-compute-0-iryjby[94782]: 11/12/2025 09:18:01 : epoch 693a8bfe : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fef500016a0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 11 09:18:01 compute-0 ceph-mon[74426]: mon.compute-0@0(leader).osd e101 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 11 09:18:01 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-2-0-compute-0-iryjby[94782]: 11/12/2025 09:18:01 : epoch 693a8bfe : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fef70004440 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 11 09:18:01 compute-0 podman[97348]: 2025-12-11 09:18:01.394500267 +0000 UTC m=+0.047945150 container create 475ebfbbf54bdda534b9480f69c558b44e4b9535bf8efac6cce831108a41b315 (image=quay.io/ceph/keepalived:2.2.4, name=happy_allen, vcs-type=git, vendor=Red Hat, Inc., architecture=x86_64, version=2.2.4, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Keepalived on RHEL 9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, io.buildah.version=1.28.2, io.openshift.expose-services=, io.openshift.tags=Ceph keepalived, name=keepalived, com.redhat.component=keepalived-container, release=1793, description=keepalived for Ceph, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2023-02-22T09:23:20, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, distribution-scope=public, summary=Provides keepalived on RHEL 9 for Ceph.)
Dec 11 09:18:01 compute-0 systemd[90513]: Starting Mark boot as successful...
Dec 11 09:18:01 compute-0 systemd[90513]: Finished Mark boot as successful.
Dec 11 09:18:01 compute-0 systemd[1]: Started libpod-conmon-475ebfbbf54bdda534b9480f69c558b44e4b9535bf8efac6cce831108a41b315.scope.
Dec 11 09:18:01 compute-0 podman[97348]: 2025-12-11 09:18:01.370377822 +0000 UTC m=+0.023822735 image pull 4a3a1ff181d97c6dcfa9138ad76eb99fa2c1e840298461d5a7a56133bc05b9a2 quay.io/ceph/keepalived:2.2.4
Dec 11 09:18:01 compute-0 systemd[1]: Started libcrun container.
Dec 11 09:18:01 compute-0 podman[97348]: 2025-12-11 09:18:01.483106488 +0000 UTC m=+0.136551391 container init 475ebfbbf54bdda534b9480f69c558b44e4b9535bf8efac6cce831108a41b315 (image=quay.io/ceph/keepalived:2.2.4, name=happy_allen, architecture=x86_64, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Provides keepalived on RHEL 9 for Ceph., io.buildah.version=1.28.2, vcs-type=git, name=keepalived, description=keepalived for Ceph, io.openshift.expose-services=, version=2.2.4, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, build-date=2023-02-22T09:23:20, io.k8s.display-name=Keepalived on RHEL 9, release=1793, io.openshift.tags=Ceph keepalived, com.redhat.component=keepalived-container, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, vendor=Red Hat, Inc.)
Dec 11 09:18:01 compute-0 podman[97348]: 2025-12-11 09:18:01.49190271 +0000 UTC m=+0.145347593 container start 475ebfbbf54bdda534b9480f69c558b44e4b9535bf8efac6cce831108a41b315 (image=quay.io/ceph/keepalived:2.2.4, name=happy_allen, io.k8s.display-name=Keepalived on RHEL 9, architecture=x86_64, io.buildah.version=1.28.2, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, vcs-type=git, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Provides keepalived on RHEL 9 for Ceph., name=keepalived, release=1793, io.openshift.expose-services=, build-date=2023-02-22T09:23:20, version=2.2.4, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, com.redhat.component=keepalived-container, vendor=Red Hat, Inc., description=keepalived for Ceph, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, io.openshift.tags=Ceph keepalived)
Dec 11 09:18:01 compute-0 podman[97348]: 2025-12-11 09:18:01.495641375 +0000 UTC m=+0.149086258 container attach 475ebfbbf54bdda534b9480f69c558b44e4b9535bf8efac6cce831108a41b315 (image=quay.io/ceph/keepalived:2.2.4, name=happy_allen, io.buildah.version=1.28.2, release=1793, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, vendor=Red Hat, Inc., distribution-scope=public, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, summary=Provides keepalived on RHEL 9 for Ceph., build-date=2023-02-22T09:23:20, io.k8s.display-name=Keepalived on RHEL 9, architecture=x86_64, vcs-type=git, description=keepalived for Ceph, version=2.2.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=keepalived-container, io.openshift.expose-services=, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=Ceph keepalived, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, name=keepalived)
Dec 11 09:18:01 compute-0 happy_allen[97365]: 0 0
Dec 11 09:18:01 compute-0 systemd[1]: libpod-475ebfbbf54bdda534b9480f69c558b44e4b9535bf8efac6cce831108a41b315.scope: Deactivated successfully.
Dec 11 09:18:01 compute-0 podman[97348]: 2025-12-11 09:18:01.499434471 +0000 UTC m=+0.152879354 container died 475ebfbbf54bdda534b9480f69c558b44e4b9535bf8efac6cce831108a41b315 (image=quay.io/ceph/keepalived:2.2.4, name=happy_allen, vendor=Red Hat, Inc., io.openshift.tags=Ceph keepalived, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.28.2, com.redhat.component=keepalived-container, description=keepalived for Ceph, version=2.2.4, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Guillaume Abrioux <gabrioux@redhat.com>, summary=Provides keepalived on RHEL 9 for Ceph., build-date=2023-02-22T09:23:20, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, release=1793, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, io.openshift.expose-services=, io.k8s.display-name=Keepalived on RHEL 9, architecture=x86_64, name=keepalived, vcs-type=git, distribution-scope=public)
Dec 11 09:18:01 compute-0 systemd[1]: var-lib-containers-storage-overlay-afc3b7c87d92050747fa1cc7a008a1eff5eb0de3ea68088cb6ba9a29c694c65b-merged.mount: Deactivated successfully.
Dec 11 09:18:01 compute-0 podman[97348]: 2025-12-11 09:18:01.540429956 +0000 UTC m=+0.193874839 container remove 475ebfbbf54bdda534b9480f69c558b44e4b9535bf8efac6cce831108a41b315 (image=quay.io/ceph/keepalived:2.2.4, name=happy_allen, description=keepalived for Ceph, io.openshift.expose-services=, release=1793, build-date=2023-02-22T09:23:20, vendor=Red Hat, Inc., name=keepalived, summary=Provides keepalived on RHEL 9 for Ceph., architecture=x86_64, vcs-type=git, io.k8s.display-name=Keepalived on RHEL 9, io.openshift.tags=Ceph keepalived, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, com.redhat.component=keepalived-container, version=2.2.4, io.buildah.version=1.28.2, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Guillaume Abrioux <gabrioux@redhat.com>)
Dec 11 09:18:01 compute-0 systemd[1]: libpod-conmon-475ebfbbf54bdda534b9480f69c558b44e4b9535bf8efac6cce831108a41b315.scope: Deactivated successfully.
Dec 11 09:18:01 compute-0 systemd[1]: Reloading.
Dec 11 09:18:01 compute-0 ceph-mon[74426]: osdmap e101: 3 total, 3 up, 3 in
Dec 11 09:18:01 compute-0 ceph-mon[74426]: from='mgr.14400 192.168.122.100:0/1063598679' entity='mgr.compute-0.wwpcae' 
Dec 11 09:18:01 compute-0 ceph-mon[74426]: from='mgr.14400 192.168.122.100:0/1063598679' entity='mgr.compute-0.wwpcae' 
Dec 11 09:18:01 compute-0 ceph-mon[74426]: from='mgr.14400 192.168.122.100:0/1063598679' entity='mgr.compute-0.wwpcae' 
Dec 11 09:18:01 compute-0 ceph-mon[74426]: 192.168.122.2 is in 192.168.122.0/24 on compute-0 interface br-ex
Dec 11 09:18:01 compute-0 ceph-mon[74426]: 192.168.122.2 is in 192.168.122.0/24 on compute-2 interface br-ex
Dec 11 09:18:01 compute-0 ceph-mon[74426]: Deploying daemon keepalived.rgw.default.compute-0.gxvbmc on compute-0
Dec 11 09:18:01 compute-0 ceph-mon[74426]: 10.18 scrub starts
Dec 11 09:18:01 compute-0 ceph-mon[74426]: pgmap v123: 353 pgs: 1 active+clean+scrubbing, 2 remapped+peering, 350 active+clean; 456 KiB data, 143 MiB used, 60 GiB / 60 GiB avail; 54 B/s, 5 objects/s recovering
Dec 11 09:18:01 compute-0 ceph-mon[74426]: 10.18 scrub ok
Dec 11 09:18:01 compute-0 ceph-mon[74426]: 9.9 scrub starts
Dec 11 09:18:01 compute-0 ceph-mon[74426]: 9.9 scrub ok
Dec 11 09:18:01 compute-0 ceph-mon[74426]: 9.d scrub starts
Dec 11 09:18:01 compute-0 ceph-mon[74426]: 9.d scrub ok
Dec 11 09:18:01 compute-0 systemd-rc-local-generator[97409]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 11 09:18:01 compute-0 systemd-sysv-generator[97412]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 11 09:18:01 compute-0 ceph-mgr[74715]: [pg_autoscaler INFO root] _maybe_adjust
Dec 11 09:18:01 compute-0 ceph-mgr[74715]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 11 09:18:01 compute-0 ceph-mgr[74715]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec 11 09:18:01 compute-0 ceph-mgr[74715]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 11 09:18:01 compute-0 ceph-mgr[74715]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 11 09:18:01 compute-0 ceph-mgr[74715]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 11 09:18:01 compute-0 ceph-mgr[74715]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 11 09:18:01 compute-0 ceph-mgr[74715]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 11 09:18:01 compute-0 ceph-mgr[74715]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 11 09:18:01 compute-0 ceph-mgr[74715]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 11 09:18:01 compute-0 ceph-mgr[74715]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 11 09:18:01 compute-0 ceph-mgr[74715]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 11 09:18:01 compute-0 ceph-mgr[74715]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Dec 11 09:18:01 compute-0 ceph-mgr[74715]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 11 09:18:01 compute-0 ceph-mgr[74715]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 11 09:18:01 compute-0 ceph-mgr[74715]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 11 09:18:01 compute-0 ceph-mgr[74715]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Dec 11 09:18:01 compute-0 ceph-mgr[74715]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 11 09:18:01 compute-0 ceph-mgr[74715]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Dec 11 09:18:01 compute-0 ceph-mgr[74715]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 11 09:18:01 compute-0 ceph-mgr[74715]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Dec 11 09:18:01 compute-0 ceph-mgr[74715]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 11 09:18:01 compute-0 ceph-mgr[74715]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 11 09:18:01 compute-0 ceph-mgr[74715]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 11 09:18:01 compute-0 ceph-mgr[74715]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 0.0 of space, bias 4.0, pg target 0.0 quantized to 32 (current 32)
Dec 11 09:18:01 compute-0 systemd[1]: Reloading.
Dec 11 09:18:02 compute-0 systemd-rc-local-generator[97456]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 11 09:18:02 compute-0 systemd-sysv-generator[97460]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 11 09:18:02 compute-0 systemd[1]: Starting Ceph keepalived.rgw.default.compute-0.gxvbmc for 31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060...
Dec 11 09:18:02 compute-0 podman[97514]: 2025-12-11 09:18:02.544365378 +0000 UTC m=+0.055523923 container create aa378ba665122d6b9080f32155dedc34f558e46767bb15aa60b6859245b6e2ec (image=quay.io/ceph/keepalived:2.2.4, name=ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-keepalived-rgw-default-compute-0-gxvbmc, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, vendor=Red Hat, Inc., build-date=2023-02-22T09:23:20, io.buildah.version=1.28.2, description=keepalived for Ceph, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, com.redhat.component=keepalived-container, io.k8s.display-name=Keepalived on RHEL 9, release=1793, version=2.2.4, summary=Provides keepalived on RHEL 9 for Ceph., maintainer=Guillaume Abrioux <gabrioux@redhat.com>, vcs-type=git, architecture=x86_64, io.openshift.tags=Ceph keepalived, name=keepalived, io.openshift.expose-services=)
Dec 11 09:18:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a121f60557ddf63a55f2758e6db2bf5fd8005ccdfaec634877942c33234d818f/merged/etc/keepalived/keepalived.conf supports timestamps until 2038 (0x7fffffff)
Dec 11 09:18:02 compute-0 podman[97514]: 2025-12-11 09:18:02.51780307 +0000 UTC m=+0.028961615 image pull 4a3a1ff181d97c6dcfa9138ad76eb99fa2c1e840298461d5a7a56133bc05b9a2 quay.io/ceph/keepalived:2.2.4
Dec 11 09:18:02 compute-0 podman[97514]: 2025-12-11 09:18:02.61317389 +0000 UTC m=+0.124332445 container init aa378ba665122d6b9080f32155dedc34f558e46767bb15aa60b6859245b6e2ec (image=quay.io/ceph/keepalived:2.2.4, name=ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-keepalived-rgw-default-compute-0-gxvbmc, io.k8s.display-name=Keepalived on RHEL 9, build-date=2023-02-22T09:23:20, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=keepalived for Ceph, io.openshift.expose-services=, summary=Provides keepalived on RHEL 9 for Ceph., io.openshift.tags=Ceph keepalived, distribution-scope=public, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, com.redhat.component=keepalived-container, release=1793, architecture=x86_64, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, version=2.2.4, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=keepalived, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, vcs-type=git, vendor=Red Hat, Inc., io.buildah.version=1.28.2)
Dec 11 09:18:02 compute-0 podman[97514]: 2025-12-11 09:18:02.621157786 +0000 UTC m=+0.132316311 container start aa378ba665122d6b9080f32155dedc34f558e46767bb15aa60b6859245b6e2ec (image=quay.io/ceph/keepalived:2.2.4, name=ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-keepalived-rgw-default-compute-0-gxvbmc, io.openshift.tags=Ceph keepalived, name=keepalived, io.k8s.display-name=Keepalived on RHEL 9, vendor=Red Hat, Inc., build-date=2023-02-22T09:23:20, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, com.redhat.component=keepalived-container, io.openshift.expose-services=, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.28.2, description=keepalived for Ceph, release=1793, version=2.2.4, summary=Provides keepalived on RHEL 9 for Ceph., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, vcs-type=git)
Dec 11 09:18:02 compute-0 bash[97514]: aa378ba665122d6b9080f32155dedc34f558e46767bb15aa60b6859245b6e2ec
Dec 11 09:18:02 compute-0 systemd[1]: Started Ceph keepalived.rgw.default.compute-0.gxvbmc for 31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060.
Dec 11 09:18:02 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-keepalived-rgw-default-compute-0-gxvbmc[97529]: Thu Dec 11 09:18:02 2025: Starting Keepalived v2.2.4 (08/21,2021)
Dec 11 09:18:02 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-keepalived-rgw-default-compute-0-gxvbmc[97529]: Thu Dec 11 09:18:02 2025: Running on Linux 5.14.0-648.el9.x86_64 #1 SMP PREEMPT_DYNAMIC Fri Dec 5 11:18:23 UTC 2025 (built for Linux 5.14.0)
Dec 11 09:18:02 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-keepalived-rgw-default-compute-0-gxvbmc[97529]: Thu Dec 11 09:18:02 2025: Command line: '/usr/sbin/keepalived' '-n' '-l' '-f' '/etc/keepalived/keepalived.conf'
Dec 11 09:18:02 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-keepalived-rgw-default-compute-0-gxvbmc[97529]: Thu Dec 11 09:18:02 2025: Configuration file /etc/keepalived/keepalived.conf
Dec 11 09:18:02 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-keepalived-rgw-default-compute-0-gxvbmc[97529]: Thu Dec 11 09:18:02 2025: Failed to bind to process monitoring socket - errno 98 - Address already in use
Dec 11 09:18:02 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-keepalived-rgw-default-compute-0-gxvbmc[97529]: Thu Dec 11 09:18:02 2025: NOTICE: setting config option max_auto_priority should result in better keepalived performance
Dec 11 09:18:02 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-keepalived-rgw-default-compute-0-gxvbmc[97529]: Thu Dec 11 09:18:02 2025: Starting VRRP child process, pid=4
Dec 11 09:18:02 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-keepalived-rgw-default-compute-0-gxvbmc[97529]: Thu Dec 11 09:18:02 2025: Startup complete
Dec 11 09:18:02 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-keepalived-nfs-cephfs-compute-0-ewssxv[95785]: Thu Dec 11 09:18:02 2025: (VI_0) Entering BACKUP STATE
Dec 11 09:18:02 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-keepalived-rgw-default-compute-0-gxvbmc[97529]: Thu Dec 11 09:18:02 2025: (VI_0) Entering BACKUP STATE (init)
Dec 11 09:18:02 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-keepalived-rgw-default-compute-0-gxvbmc[97529]: Thu Dec 11 09:18:02 2025: VRRP_Script(check_backend) succeeded
Dec 11 09:18:02 compute-0 sudo[97281]: pam_unix(sudo:session): session closed for user root
Dec 11 09:18:02 compute-0 ceph-mon[74426]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec 11 09:18:02 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-2-0-compute-0-iryjby[94782]: 11/12/2025 09:18:02 : epoch 693a8bfe : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fef70004440 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 11 09:18:02 compute-0 ceph-mon[74426]: log_channel(audit) log [INF] : from='mgr.14400 192.168.122.100:0/1063598679' entity='mgr.compute-0.wwpcae' 
Dec 11 09:18:02 compute-0 ceph-mon[74426]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec 11 09:18:02 compute-0 radosgw[93354]: ====== starting new request req=0x7f36cd2845d0 =====
Dec 11 09:18:02 compute-0 radosgw[93354]: ====== req done req=0x7f36cd2845d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 11 09:18:02 compute-0 radosgw[93354]: beast: 0x7f36cd2845d0: 192.168.122.102 - anonymous [11/Dec/2025:09:18:02.829 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 11 09:18:02 compute-0 ceph-mon[74426]: log_channel(audit) log [INF] : from='mgr.14400 192.168.122.100:0/1063598679' entity='mgr.compute-0.wwpcae' 
Dec 11 09:18:02 compute-0 ceph-mon[74426]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.ingress.rgw.default}] v 0)
Dec 11 09:18:02 compute-0 ceph-mon[74426]: log_channel(audit) log [INF] : from='mgr.14400 192.168.122.100:0/1063598679' entity='mgr.compute-0.wwpcae' 
Dec 11 09:18:02 compute-0 ceph-mgr[74715]: [progress INFO root] complete: finished ev a8591716-c9f2-4037-a32c-485a8bbc4e94 (Updating ingress.rgw.default deployment (+4 -> 4))
Dec 11 09:18:02 compute-0 ceph-mgr[74715]: [progress INFO root] Completed event a8591716-c9f2-4037-a32c-485a8bbc4e94 (Updating ingress.rgw.default deployment (+4 -> 4)) in 8 seconds
Dec 11 09:18:02 compute-0 ceph-mon[74426]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.ingress.rgw.default}] v 0)
Dec 11 09:18:02 compute-0 ceph-mgr[74715]: log_channel(cluster) log [DBG] : pgmap v124: 353 pgs: 1 active+clean+scrubbing, 2 remapped+peering, 350 active+clean; 456 KiB data, 143 MiB used, 60 GiB / 60 GiB avail; 38 B/s, 4 objects/s recovering
Dec 11 09:18:02 compute-0 radosgw[93354]: ====== starting new request req=0x7f36cd2845d0 =====
Dec 11 09:18:02 compute-0 radosgw[93354]: ====== req done req=0x7f36cd2845d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 11 09:18:02 compute-0 radosgw[93354]: beast: 0x7f36cd2845d0: 192.168.122.100 - anonymous [11/Dec/2025:09:18:02.946 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 11 09:18:02 compute-0 ceph-mon[74426]: log_channel(audit) log [INF] : from='mgr.14400 192.168.122.100:0/1063598679' entity='mgr.compute-0.wwpcae' 
Dec 11 09:18:02 compute-0 ceph-mgr[74715]: [progress INFO root] update: starting ev 5540e681-1069-4997-8004-6d8a509f004b (Updating prometheus deployment (+1 -> 1))
Dec 11 09:18:03 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-2-0-compute-0-iryjby[94782]: 11/12/2025 09:18:03 : epoch 693a8bfe : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fef60003490 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 11 09:18:03 compute-0 ceph-mon[74426]: 9.7 scrub starts
Dec 11 09:18:03 compute-0 ceph-mon[74426]: 9.7 scrub ok
Dec 11 09:18:03 compute-0 ceph-mon[74426]: 9.f scrub starts
Dec 11 09:18:03 compute-0 ceph-mon[74426]: 9.f scrub ok
Dec 11 09:18:03 compute-0 ceph-mon[74426]: from='mgr.14400 192.168.122.100:0/1063598679' entity='mgr.compute-0.wwpcae' 
Dec 11 09:18:03 compute-0 ceph-mon[74426]: from='mgr.14400 192.168.122.100:0/1063598679' entity='mgr.compute-0.wwpcae' 
Dec 11 09:18:03 compute-0 ceph-mon[74426]: from='mgr.14400 192.168.122.100:0/1063598679' entity='mgr.compute-0.wwpcae' 
Dec 11 09:18:03 compute-0 ceph-mgr[74715]: [cephadm INFO cephadm.serve] Deploying daemon prometheus.compute-0 on compute-0
Dec 11 09:18:03 compute-0 ceph-mgr[74715]: log_channel(cephadm) log [INF] : Deploying daemon prometheus.compute-0 on compute-0
Dec 11 09:18:03 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-keepalived-nfs-cephfs-compute-0-ewssxv[95785]: Thu Dec 11 09:18:03 2025: (VI_0) Entering MASTER STATE
Dec 11 09:18:03 compute-0 sudo[97538]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 11 09:18:03 compute-0 sudo[97538]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 11 09:18:03 compute-0 sudo[97538]: pam_unix(sudo:session): session closed for user root
Dec 11 09:18:03 compute-0 sudo[97563]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/prometheus/prometheus:v2.51.0 --timeout 895 _orch deploy --fsid 31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060
Dec 11 09:18:03 compute-0 sudo[97563]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 11 09:18:03 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-2-0-compute-0-iryjby[94782]: 11/12/2025 09:18:03 : epoch 693a8bfe : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fef54004050 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 11 09:18:03 compute-0 ceph-mgr[74715]: [progress INFO root] Writing back 30 completed events
Dec 11 09:18:03 compute-0 ceph-mon[74426]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0)
Dec 11 09:18:03 compute-0 ceph-mon[74426]: log_channel(audit) log [INF] : from='mgr.14400 192.168.122.100:0/1063598679' entity='mgr.compute-0.wwpcae' 
Dec 11 09:18:04 compute-0 ceph-mon[74426]: pgmap v124: 353 pgs: 1 active+clean+scrubbing, 2 remapped+peering, 350 active+clean; 456 KiB data, 143 MiB used, 60 GiB / 60 GiB avail; 38 B/s, 4 objects/s recovering
Dec 11 09:18:04 compute-0 ceph-mon[74426]: from='mgr.14400 192.168.122.100:0/1063598679' entity='mgr.compute-0.wwpcae' 
Dec 11 09:18:04 compute-0 ceph-mon[74426]: 11.e scrub starts
Dec 11 09:18:04 compute-0 ceph-mon[74426]: 11.e scrub ok
Dec 11 09:18:04 compute-0 ceph-mon[74426]: 10.e scrub starts
Dec 11 09:18:04 compute-0 ceph-mon[74426]: 10.e scrub ok
Dec 11 09:18:04 compute-0 ceph-mon[74426]: from='mgr.14400 192.168.122.100:0/1063598679' entity='mgr.compute-0.wwpcae' 
Dec 11 09:18:04 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-keepalived-rgw-default-compute-0-gxvbmc[97529]: Thu Dec 11 09:18:04 2025: (VI_0) received lower priority (90) advert from 192.168.122.102 - discarding
Dec 11 09:18:04 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-keepalived-nfs-cephfs-compute-0-ewssxv[95785]: Thu Dec 11 09:18:04 2025: (VI_0) received an invalid passwd!
Dec 11 09:18:04 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-2-0-compute-0-iryjby[94782]: 11/12/2025 09:18:04 : epoch 693a8bfe : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fef500032f0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 11 09:18:04 compute-0 sudo[97665]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cngnopbdlnfzqizuicpjdyhiyrgxkwgb ; /usr/bin/python3'
Dec 11 09:18:04 compute-0 sudo[97665]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 11 09:18:04 compute-0 radosgw[93354]: ====== starting new request req=0x7f36cd2845d0 =====
Dec 11 09:18:04 compute-0 radosgw[93354]: ====== req done req=0x7f36cd2845d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 11 09:18:04 compute-0 radosgw[93354]: beast: 0x7f36cd2845d0: 192.168.122.102 - anonymous [11/Dec/2025:09:18:04.856 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 11 09:18:04 compute-0 ceph-mgr[74715]: log_channel(cluster) log [DBG] : pgmap v125: 353 pgs: 353 active+clean; 456 KiB data, 144 MiB used, 60 GiB / 60 GiB avail
Dec 11 09:18:04 compute-0 ceph-mon[74426]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "15"} v 0)
Dec 11 09:18:04 compute-0 ceph-mon[74426]: log_channel(audit) log [INF] : from='mgr.14400 192.168.122.100:0/1063598679' entity='mgr.compute-0.wwpcae' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "15"}]: dispatch
Dec 11 09:18:04 compute-0 radosgw[93354]: ====== starting new request req=0x7f36cd2845d0 =====
Dec 11 09:18:04 compute-0 radosgw[93354]: ====== req done req=0x7f36cd2845d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 11 09:18:04 compute-0 radosgw[93354]: beast: 0x7f36cd2845d0: 192.168.122.100 - anonymous [11/Dec/2025:09:18:04.949 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 11 09:18:05 compute-0 python3[97667]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint radosgw-admin quay.io/ceph/ceph:v19 --fsid 31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch ps --daemon-type rgw --format json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 11 09:18:05 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-2-0-compute-0-iryjby[94782]: 11/12/2025 09:18:05 : epoch 693a8bfe : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fef70004440 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 11 09:18:05 compute-0 podman[97669]: 2025-12-11 09:18:05.083029948 +0000 UTC m=+0.049133826 container create 287778e9d8042fc99036a2b5064e07bbe5a1eb0e6442597a368abab30e117bea (image=quay.io/ceph/ceph:v19, name=jovial_lamport, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2)
Dec 11 09:18:05 compute-0 ceph-mon[74426]: mon.compute-0@0(leader).osd e101 do_prune osdmap full prune enabled
Dec 11 09:18:05 compute-0 ceph-mon[74426]: log_channel(audit) log [INF] : from='mgr.14400 192.168.122.100:0/1063598679' entity='mgr.compute-0.wwpcae' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "15"}]': finished
Dec 11 09:18:05 compute-0 ceph-mon[74426]: mon.compute-0@0(leader).osd e102 e102: 3 total, 3 up, 3 in
Dec 11 09:18:05 compute-0 ceph-mon[74426]: log_channel(cluster) log [DBG] : osdmap e102: 3 total, 3 up, 3 in
Dec 11 09:18:05 compute-0 ceph-mon[74426]: Deploying daemon prometheus.compute-0 on compute-0
Dec 11 09:18:05 compute-0 ceph-mon[74426]: 11.3 scrub starts
Dec 11 09:18:05 compute-0 ceph-mon[74426]: 11.3 scrub ok
Dec 11 09:18:05 compute-0 ceph-mon[74426]: 10.a scrub starts
Dec 11 09:18:05 compute-0 ceph-mon[74426]: 10.a scrub ok
Dec 11 09:18:05 compute-0 ceph-mon[74426]: from='mgr.14400 192.168.122.100:0/1063598679' entity='mgr.compute-0.wwpcae' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "15"}]: dispatch
Dec 11 09:18:05 compute-0 systemd[1]: Started libpod-conmon-287778e9d8042fc99036a2b5064e07bbe5a1eb0e6442597a368abab30e117bea.scope.
Dec 11 09:18:05 compute-0 systemd[1]: Started libcrun container.
Dec 11 09:18:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/089b9c71e9afc8a098e6ab3919338a8e53cc709f2c640b82b5c8872ee3959fac/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec 11 09:18:05 compute-0 podman[97669]: 2025-12-11 09:18:05.062396282 +0000 UTC m=+0.028500190 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Dec 11 09:18:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/089b9c71e9afc8a098e6ab3919338a8e53cc709f2c640b82b5c8872ee3959fac/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 11 09:18:05 compute-0 podman[97669]: 2025-12-11 09:18:05.174799358 +0000 UTC m=+0.140903256 container init 287778e9d8042fc99036a2b5064e07bbe5a1eb0e6442597a368abab30e117bea (image=quay.io/ceph/ceph:v19, name=jovial_lamport, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 11 09:18:05 compute-0 podman[97669]: 2025-12-11 09:18:05.181964178 +0000 UTC m=+0.148068056 container start 287778e9d8042fc99036a2b5064e07bbe5a1eb0e6442597a368abab30e117bea (image=quay.io/ceph/ceph:v19, name=jovial_lamport, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.40.1)
Dec 11 09:18:05 compute-0 podman[97669]: 2025-12-11 09:18:05.185689424 +0000 UTC m=+0.151793312 container attach 287778e9d8042fc99036a2b5064e07bbe5a1eb0e6442597a368abab30e117bea (image=quay.io/ceph/ceph:v19, name=jovial_lamport, ceph=True, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 11 09:18:05 compute-0 jovial_lamport[97684]: ERROR: invalid flag --daemon-type
Dec 11 09:18:05 compute-0 systemd[1]: libpod-287778e9d8042fc99036a2b5064e07bbe5a1eb0e6442597a368abab30e117bea.scope: Deactivated successfully.
Dec 11 09:18:05 compute-0 podman[97669]: 2025-12-11 09:18:05.243347911 +0000 UTC m=+0.209451809 container died 287778e9d8042fc99036a2b5064e07bbe5a1eb0e6442597a368abab30e117bea (image=quay.io/ceph/ceph:v19, name=jovial_lamport, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 11 09:18:05 compute-0 systemd[1]: var-lib-containers-storage-overlay-089b9c71e9afc8a098e6ab3919338a8e53cc709f2c640b82b5c8872ee3959fac-merged.mount: Deactivated successfully.
Dec 11 09:18:05 compute-0 podman[97669]: 2025-12-11 09:18:05.288469062 +0000 UTC m=+0.254572930 container remove 287778e9d8042fc99036a2b5064e07bbe5a1eb0e6442597a368abab30e117bea (image=quay.io/ceph/ceph:v19, name=jovial_lamport, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325)
Dec 11 09:18:05 compute-0 sudo[97665]: pam_unix(sudo:session): session closed for user root
Dec 11 09:18:05 compute-0 systemd[1]: libpod-conmon-287778e9d8042fc99036a2b5064e07bbe5a1eb0e6442597a368abab30e117bea.scope: Deactivated successfully.
Dec 11 09:18:05 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-2-0-compute-0-iryjby[94782]: 11/12/2025 09:18:05 : epoch 693a8bfe : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fef60003490 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 11 09:18:05 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-keepalived-rgw-default-compute-0-gxvbmc[97529]: Thu Dec 11 09:18:05 2025: (VI_0) received lower priority (90) advert from 192.168.122.102 - discarding
Dec 11 09:18:05 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-keepalived-nfs-cephfs-compute-0-ewssxv[95785]: Thu Dec 11 09:18:05 2025: (VI_0) received an invalid passwd!
Dec 11 09:18:06 compute-0 ceph-mon[74426]: pgmap v125: 353 pgs: 353 active+clean; 456 KiB data, 144 MiB used, 60 GiB / 60 GiB avail
Dec 11 09:18:06 compute-0 ceph-mon[74426]: 8.d scrub starts
Dec 11 09:18:06 compute-0 ceph-mon[74426]: 8.d scrub ok
Dec 11 09:18:06 compute-0 ceph-mon[74426]: from='mgr.14400 192.168.122.100:0/1063598679' entity='mgr.compute-0.wwpcae' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "15"}]': finished
Dec 11 09:18:06 compute-0 ceph-mon[74426]: osdmap e102: 3 total, 3 up, 3 in
Dec 11 09:18:06 compute-0 ceph-mon[74426]: 10.9 scrub starts
Dec 11 09:18:06 compute-0 ceph-mon[74426]: 10.9 scrub ok
Dec 11 09:18:06 compute-0 ceph-mon[74426]: mon.compute-0@0(leader).osd e102 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 11 09:18:06 compute-0 ceph-mon[74426]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #21. Immutable memtables: 0.
Dec 11 09:18:06 compute-0 ceph-mon[74426]: rocksdb: (Original Log Time 2025/12/11-09:18:06.227802) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Dec 11 09:18:06 compute-0 ceph-mon[74426]: rocksdb: [db/flush_job.cc:856] [default] [JOB 5] Flushing memtable with next log file: 21
Dec 11 09:18:06 compute-0 ceph-mon[74426]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765444686228000, "job": 5, "event": "flush_started", "num_memtables": 1, "num_entries": 960, "num_deletes": 250, "total_data_size": 1106868, "memory_usage": 1128224, "flush_reason": "Manual Compaction"}
Dec 11 09:18:06 compute-0 ceph-mon[74426]: rocksdb: [db/flush_job.cc:885] [default] [JOB 5] Level-0 flush table #22: started
Dec 11 09:18:06 compute-0 ceph-mon[74426]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765444686237279, "cf_name": "default", "job": 5, "event": "table_file_creation", "file_number": 22, "file_size": 1056110, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 7816, "largest_seqno": 8775, "table_properties": {"data_size": 1051173, "index_size": 2269, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1605, "raw_key_size": 12904, "raw_average_key_size": 20, "raw_value_size": 1040079, "raw_average_value_size": 1640, "num_data_blocks": 100, "num_entries": 634, "num_filter_entries": 634, "num_deletions": 250, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765444662, "oldest_key_time": 1765444662, "file_creation_time": 1765444686, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "e1ac8294-d02a-459a-8058-7f05c4f78e7d", "db_session_id": "U2WUY0PZPH8WS8N5I572", "orig_file_number": 22, "seqno_to_time_mapping": "N/A"}}
Dec 11 09:18:06 compute-0 ceph-mon[74426]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 5] Flush lasted 9529 microseconds, and 5390 cpu microseconds.
Dec 11 09:18:06 compute-0 ceph-mon[74426]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 11 09:18:06 compute-0 ceph-mon[74426]: rocksdb: (Original Log Time 2025/12/11-09:18:06.237381) [db/flush_job.cc:967] [default] [JOB 5] Level-0 flush table #22: 1056110 bytes OK
Dec 11 09:18:06 compute-0 ceph-mon[74426]: rocksdb: (Original Log Time 2025/12/11-09:18:06.237420) [db/memtable_list.cc:519] [default] Level-0 commit table #22 started
Dec 11 09:18:06 compute-0 ceph-mon[74426]: rocksdb: (Original Log Time 2025/12/11-09:18:06.239149) [db/memtable_list.cc:722] [default] Level-0 commit table #22: memtable #1 done
Dec 11 09:18:06 compute-0 ceph-mon[74426]: rocksdb: (Original Log Time 2025/12/11-09:18:06.239173) EVENT_LOG_v1 {"time_micros": 1765444686239168, "job": 5, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Dec 11 09:18:06 compute-0 ceph-mon[74426]: rocksdb: (Original Log Time 2025/12/11-09:18:06.239190) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Dec 11 09:18:06 compute-0 ceph-mon[74426]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 5] Try to delete WAL files size 1101698, prev total WAL file size 1101698, number of live WAL files 2.
Dec 11 09:18:06 compute-0 ceph-mon[74426]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000018.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 11 09:18:06 compute-0 ceph-mon[74426]: rocksdb: (Original Log Time 2025/12/11-09:18:06.239966) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6B760030' seq:72057594037927935, type:22 .. '6B7600323531' seq:0, type:0; will stop at (end)
Dec 11 09:18:06 compute-0 ceph-mon[74426]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 6] Compacting 1@0 + 1@6 files to L6, score -1.00
Dec 11 09:18:06 compute-0 ceph-mon[74426]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 5 Base level 0, inputs: [22(1031KB)], [20(12MB)]
Dec 11 09:18:06 compute-0 ceph-mon[74426]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765444686240112, "job": 6, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [22], "files_L6": [20], "score": -1, "input_data_size": 13667427, "oldest_snapshot_seqno": -1}
Dec 11 09:18:06 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-keepalived-rgw-default-compute-0-gxvbmc[97529]: Thu Dec 11 09:18:06 2025: (VI_0) Entering MASTER STATE
Dec 11 09:18:06 compute-0 ceph-mon[74426]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 6] Generated table #23: 3543 keys, 13239515 bytes, temperature: kUnknown
Dec 11 09:18:06 compute-0 ceph-mon[74426]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765444686349097, "cf_name": "default", "job": 6, "event": "table_file_creation", "file_number": 23, "file_size": 13239515, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 13210540, "index_size": 19009, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 8901, "raw_key_size": 90945, "raw_average_key_size": 25, "raw_value_size": 13140135, "raw_average_value_size": 3708, "num_data_blocks": 826, "num_entries": 3543, "num_filter_entries": 3543, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765444346, "oldest_key_time": 0, "file_creation_time": 1765444686, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "e1ac8294-d02a-459a-8058-7f05c4f78e7d", "db_session_id": "U2WUY0PZPH8WS8N5I572", "orig_file_number": 23, "seqno_to_time_mapping": "N/A"}}
Dec 11 09:18:06 compute-0 ceph-mon[74426]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 11 09:18:06 compute-0 ceph-mon[74426]: rocksdb: (Original Log Time 2025/12/11-09:18:06.349445) [db/compaction/compaction_job.cc:1663] [default] [JOB 6] Compacted 1@0 + 1@6 files to L6 => 13239515 bytes
Dec 11 09:18:06 compute-0 ceph-mon[74426]: rocksdb: (Original Log Time 2025/12/11-09:18:06.351246) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 125.3 rd, 121.4 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.0, 12.0 +0.0 blob) out(12.6 +0.0 blob), read-write-amplify(25.5) write-amplify(12.5) OK, records in: 4064, records dropped: 521 output_compression: NoCompression
Dec 11 09:18:06 compute-0 ceph-mon[74426]: rocksdb: (Original Log Time 2025/12/11-09:18:06.351271) EVENT_LOG_v1 {"time_micros": 1765444686351260, "job": 6, "event": "compaction_finished", "compaction_time_micros": 109083, "compaction_time_cpu_micros": 44595, "output_level": 6, "num_output_files": 1, "total_output_size": 13239515, "num_input_records": 4064, "num_output_records": 3543, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Dec 11 09:18:06 compute-0 ceph-mon[74426]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000022.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 11 09:18:06 compute-0 ceph-mon[74426]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765444686351629, "job": 6, "event": "table_file_deletion", "file_number": 22}
Dec 11 09:18:06 compute-0 ceph-mon[74426]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000020.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 11 09:18:06 compute-0 ceph-mon[74426]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765444686353805, "job": 6, "event": "table_file_deletion", "file_number": 20}
Dec 11 09:18:06 compute-0 ceph-mon[74426]: rocksdb: (Original Log Time 2025/12/11-09:18:06.239772) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 11 09:18:06 compute-0 ceph-mon[74426]: rocksdb: (Original Log Time 2025/12/11-09:18:06.353902) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 11 09:18:06 compute-0 ceph-mon[74426]: rocksdb: (Original Log Time 2025/12/11-09:18:06.353910) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 11 09:18:06 compute-0 ceph-mon[74426]: rocksdb: (Original Log Time 2025/12/11-09:18:06.353912) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 11 09:18:06 compute-0 ceph-mon[74426]: rocksdb: (Original Log Time 2025/12/11-09:18:06.353914) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 11 09:18:06 compute-0 ceph-mon[74426]: rocksdb: (Original Log Time 2025/12/11-09:18:06.353918) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 11 09:18:06 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-2-0-compute-0-iryjby[94782]: 11/12/2025 09:18:06 : epoch 693a8bfe : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fef500032f0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 11 09:18:06 compute-0 radosgw[93354]: ====== starting new request req=0x7f36cd2845d0 =====
Dec 11 09:18:06 compute-0 radosgw[93354]: ====== req done req=0x7f36cd2845d0 op status=0 http_status=200 latency=0.001000031s ======
Dec 11 09:18:06 compute-0 radosgw[93354]: beast: 0x7f36cd2845d0: 192.168.122.102 - anonymous [11/Dec/2025:09:18:06.912 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Dec 11 09:18:06 compute-0 ceph-mgr[74715]: log_channel(cluster) log [DBG] : pgmap v127: 353 pgs: 353 active+clean; 456 KiB data, 144 MiB used, 60 GiB / 60 GiB avail
Dec 11 09:18:06 compute-0 ceph-mon[74426]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "16"} v 0)
Dec 11 09:18:06 compute-0 ceph-mon[74426]: log_channel(audit) log [INF] : from='mgr.14400 192.168.122.100:0/1063598679' entity='mgr.compute-0.wwpcae' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "16"}]: dispatch
Dec 11 09:18:06 compute-0 radosgw[93354]: ====== starting new request req=0x7f36cd2845d0 =====
Dec 11 09:18:06 compute-0 radosgw[93354]: ====== req done req=0x7f36cd2845d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 11 09:18:06 compute-0 radosgw[93354]: beast: 0x7f36cd2845d0: 192.168.122.100 - anonymous [11/Dec/2025:09:18:06.952 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 11 09:18:07 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-2-0-compute-0-iryjby[94782]: 11/12/2025 09:18:07 : epoch 693a8bfe : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fef54004050 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 11 09:18:07 compute-0 ceph-mon[74426]: mon.compute-0@0(leader).osd e102 do_prune osdmap full prune enabled
Dec 11 09:18:07 compute-0 ceph-mon[74426]: log_channel(audit) log [INF] : from='mgr.14400 192.168.122.100:0/1063598679' entity='mgr.compute-0.wwpcae' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "16"}]': finished
Dec 11 09:18:07 compute-0 ceph-mon[74426]: mon.compute-0@0(leader).osd e103 e103: 3 total, 3 up, 3 in
Dec 11 09:18:07 compute-0 ceph-mon[74426]: log_channel(cluster) log [DBG] : osdmap e103: 3 total, 3 up, 3 in
Dec 11 09:18:07 compute-0 ceph-mon[74426]: 10.3 scrub starts
Dec 11 09:18:07 compute-0 ceph-mon[74426]: 10.3 scrub ok
Dec 11 09:18:07 compute-0 ceph-mon[74426]: 10.6 scrub starts
Dec 11 09:18:07 compute-0 ceph-mon[74426]: 10.6 scrub ok
Dec 11 09:18:07 compute-0 ceph-mon[74426]: pgmap v127: 353 pgs: 353 active+clean; 456 KiB data, 144 MiB used, 60 GiB / 60 GiB avail
Dec 11 09:18:07 compute-0 ceph-mon[74426]: from='mgr.14400 192.168.122.100:0/1063598679' entity='mgr.compute-0.wwpcae' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "16"}]: dispatch
Dec 11 09:18:07 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-2-0-compute-0-iryjby[94782]: 11/12/2025 09:18:07 : epoch 693a8bfe : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fef70004440 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 11 09:18:08 compute-0 ceph-mon[74426]: mon.compute-0@0(leader).osd e103 do_prune osdmap full prune enabled
Dec 11 09:18:08 compute-0 ceph-mon[74426]: 10.19 scrub starts
Dec 11 09:18:08 compute-0 ceph-mon[74426]: 10.19 scrub ok
Dec 11 09:18:08 compute-0 ceph-mon[74426]: from='mgr.14400 192.168.122.100:0/1063598679' entity='mgr.compute-0.wwpcae' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "16"}]': finished
Dec 11 09:18:08 compute-0 ceph-mon[74426]: osdmap e103: 3 total, 3 up, 3 in
Dec 11 09:18:08 compute-0 ceph-mon[74426]: 10.b scrub starts
Dec 11 09:18:08 compute-0 ceph-mon[74426]: 10.b scrub ok
Dec 11 09:18:08 compute-0 ceph-mon[74426]: mon.compute-0@0(leader).osd e104 e104: 3 total, 3 up, 3 in
Dec 11 09:18:08 compute-0 ceph-mon[74426]: log_channel(cluster) log [DBG] : osdmap e104: 3 total, 3 up, 3 in
Dec 11 09:18:08 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-2-0-compute-0-iryjby[94782]: 11/12/2025 09:18:08 : epoch 693a8bfe : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fef60003490 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 11 09:18:08 compute-0 radosgw[93354]: ====== starting new request req=0x7f36cd2845d0 =====
Dec 11 09:18:08 compute-0 radosgw[93354]: ====== req done req=0x7f36cd2845d0 op status=0 http_status=200 latency=0.001000031s ======
Dec 11 09:18:08 compute-0 radosgw[93354]: beast: 0x7f36cd2845d0: 192.168.122.102 - anonymous [11/Dec/2025:09:18:08.917 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Dec 11 09:18:08 compute-0 ceph-mgr[74715]: log_channel(cluster) log [DBG] : pgmap v130: 353 pgs: 2 unknown, 351 active+clean; 456 KiB data, 144 MiB used, 60 GiB / 60 GiB avail
Dec 11 09:18:08 compute-0 radosgw[93354]: ====== starting new request req=0x7f36cd2845d0 =====
Dec 11 09:18:08 compute-0 radosgw[93354]: ====== req done req=0x7f36cd2845d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 11 09:18:08 compute-0 radosgw[93354]: beast: 0x7f36cd2845d0: 192.168.122.100 - anonymous [11/Dec/2025:09:18:08.955 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 11 09:18:09 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-2-0-compute-0-iryjby[94782]: 11/12/2025 09:18:09 : epoch 693a8bfe : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fef500032f0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 11 09:18:09 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-2-0-compute-0-iryjby[94782]: 11/12/2025 09:18:09 : epoch 693a8bfe : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fef54004050 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 11 09:18:09 compute-0 ceph-mon[74426]: mon.compute-0@0(leader).osd e104 do_prune osdmap full prune enabled
Dec 11 09:18:09 compute-0 ceph-mon[74426]: osdmap e104: 3 total, 3 up, 3 in
Dec 11 09:18:09 compute-0 ceph-mon[74426]: pgmap v130: 353 pgs: 2 unknown, 351 active+clean; 456 KiB data, 144 MiB used, 60 GiB / 60 GiB avail
Dec 11 09:18:09 compute-0 ceph-mon[74426]: 10.10 scrub starts
Dec 11 09:18:09 compute-0 ceph-mon[74426]: 10.10 scrub ok
Dec 11 09:18:09 compute-0 ceph-mon[74426]: mon.compute-0@0(leader).osd e105 e105: 3 total, 3 up, 3 in
Dec 11 09:18:09 compute-0 ceph-mon[74426]: log_channel(cluster) log [DBG] : osdmap e105: 3 total, 3 up, 3 in
Dec 11 09:18:10 compute-0 podman[97627]: 2025-12-11 09:18:10.437617946 +0000 UTC m=+6.650708910 volume create b9aaf53f6194658727d1265b50b087b86e93d7f9b384100040200861374a2470
Dec 11 09:18:10 compute-0 podman[97627]: 2025-12-11 09:18:10.443900079 +0000 UTC m=+6.656991043 container create 8875531f60a509224dd3c600a039265f5af344a1ff3acd903d3f27b672d046af (image=quay.io/prometheus/prometheus:v2.51.0, name=boring_jemison, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec 11 09:18:10 compute-0 podman[97627]: 2025-12-11 09:18:10.42348586 +0000 UTC m=+6.636576844 image pull 1d3b7f56885b6dd623f1785be963aa9c195f86bc256ea454e8d02a7980b79c53 quay.io/prometheus/prometheus:v2.51.0
Dec 11 09:18:10 compute-0 systemd[1]: Started libpod-conmon-8875531f60a509224dd3c600a039265f5af344a1ff3acd903d3f27b672d046af.scope.
Dec 11 09:18:10 compute-0 systemd[1]: Started libcrun container.
Dec 11 09:18:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9a7535a4e74b8308d8607d432a01f4b5bc241d36e2c2e9a199153c6a20d875d2/merged/prometheus supports timestamps until 2038 (0x7fffffff)
Dec 11 09:18:10 compute-0 podman[97627]: 2025-12-11 09:18:10.532630185 +0000 UTC m=+6.745721179 container init 8875531f60a509224dd3c600a039265f5af344a1ff3acd903d3f27b672d046af (image=quay.io/prometheus/prometheus:v2.51.0, name=boring_jemison, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec 11 09:18:10 compute-0 podman[97627]: 2025-12-11 09:18:10.540978273 +0000 UTC m=+6.754069227 container start 8875531f60a509224dd3c600a039265f5af344a1ff3acd903d3f27b672d046af (image=quay.io/prometheus/prometheus:v2.51.0, name=boring_jemison, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec 11 09:18:10 compute-0 boring_jemison[97961]: 65534 65534
Dec 11 09:18:10 compute-0 podman[97627]: 2025-12-11 09:18:10.545254464 +0000 UTC m=+6.758345528 container attach 8875531f60a509224dd3c600a039265f5af344a1ff3acd903d3f27b672d046af (image=quay.io/prometheus/prometheus:v2.51.0, name=boring_jemison, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec 11 09:18:10 compute-0 systemd[1]: libpod-8875531f60a509224dd3c600a039265f5af344a1ff3acd903d3f27b672d046af.scope: Deactivated successfully.
Dec 11 09:18:10 compute-0 podman[97627]: 2025-12-11 09:18:10.546160342 +0000 UTC m=+6.759251306 container died 8875531f60a509224dd3c600a039265f5af344a1ff3acd903d3f27b672d046af (image=quay.io/prometheus/prometheus:v2.51.0, name=boring_jemison, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec 11 09:18:10 compute-0 ceph-mon[74426]: mon.compute-0@0(leader).osd e105 do_prune osdmap full prune enabled
Dec 11 09:18:10 compute-0 ceph-mon[74426]: mon.compute-0@0(leader).osd e106 e106: 3 total, 3 up, 3 in
Dec 11 09:18:10 compute-0 systemd[1]: var-lib-containers-storage-overlay-9a7535a4e74b8308d8607d432a01f4b5bc241d36e2c2e9a199153c6a20d875d2-merged.mount: Deactivated successfully.
Dec 11 09:18:10 compute-0 ceph-mon[74426]: log_channel(cluster) log [DBG] : osdmap e106: 3 total, 3 up, 3 in
Dec 11 09:18:10 compute-0 ceph-mon[74426]: osdmap e105: 3 total, 3 up, 3 in
Dec 11 09:18:10 compute-0 ceph-mon[74426]: 10.1b scrub starts
Dec 11 09:18:10 compute-0 ceph-mon[74426]: 10.1b scrub ok
Dec 11 09:18:10 compute-0 podman[97627]: 2025-12-11 09:18:10.59637288 +0000 UTC m=+6.809463864 container remove 8875531f60a509224dd3c600a039265f5af344a1ff3acd903d3f27b672d046af (image=quay.io/prometheus/prometheus:v2.51.0, name=boring_jemison, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec 11 09:18:10 compute-0 podman[97627]: 2025-12-11 09:18:10.600326593 +0000 UTC m=+6.813417557 volume remove b9aaf53f6194658727d1265b50b087b86e93d7f9b384100040200861374a2470
Dec 11 09:18:10 compute-0 systemd[1]: libpod-conmon-8875531f60a509224dd3c600a039265f5af344a1ff3acd903d3f27b672d046af.scope: Deactivated successfully.
Dec 11 09:18:10 compute-0 podman[97978]: 2025-12-11 09:18:10.692532915 +0000 UTC m=+0.056713229 volume create 6c1062536cfad2be67bab6952591ff9a8344eda61c2db5d796a5ea26ec817c5e
Dec 11 09:18:10 compute-0 podman[97978]: 2025-12-11 09:18:10.706041702 +0000 UTC m=+0.070222026 container create adddc1ab5d08c76479b1929d25e7e1ee645b8fce60f9a5559528a97654c1fbc5 (image=quay.io/prometheus/prometheus:v2.51.0, name=priceless_hawking, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec 11 09:18:10 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-2-0-compute-0-iryjby[94782]: 11/12/2025 09:18:10 : epoch 693a8bfe : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fef70004440 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 11 09:18:10 compute-0 systemd[1]: Started libpod-conmon-adddc1ab5d08c76479b1929d25e7e1ee645b8fce60f9a5559528a97654c1fbc5.scope.
Dec 11 09:18:10 compute-0 systemd[1]: Started libcrun container.
Dec 11 09:18:10 compute-0 podman[97978]: 2025-12-11 09:18:10.673298901 +0000 UTC m=+0.037479225 image pull 1d3b7f56885b6dd623f1785be963aa9c195f86bc256ea454e8d02a7980b79c53 quay.io/prometheus/prometheus:v2.51.0
Dec 11 09:18:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/88ca7d45022abab1cd4ed984922d62830df55869ee69ea9d6574749ab80221a3/merged/prometheus supports timestamps until 2038 (0x7fffffff)
Dec 11 09:18:10 compute-0 podman[97978]: 2025-12-11 09:18:10.781200769 +0000 UTC m=+0.145381103 container init adddc1ab5d08c76479b1929d25e7e1ee645b8fce60f9a5559528a97654c1fbc5 (image=quay.io/prometheus/prometheus:v2.51.0, name=priceless_hawking, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec 11 09:18:10 compute-0 podman[97978]: 2025-12-11 09:18:10.786793951 +0000 UTC m=+0.150974255 container start adddc1ab5d08c76479b1929d25e7e1ee645b8fce60f9a5559528a97654c1fbc5 (image=quay.io/prometheus/prometheus:v2.51.0, name=priceless_hawking, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec 11 09:18:10 compute-0 priceless_hawking[97995]: 65534 65534
Dec 11 09:18:10 compute-0 systemd[1]: libpod-adddc1ab5d08c76479b1929d25e7e1ee645b8fce60f9a5559528a97654c1fbc5.scope: Deactivated successfully.
Dec 11 09:18:10 compute-0 podman[97978]: 2025-12-11 09:18:10.789773403 +0000 UTC m=+0.153953707 container attach adddc1ab5d08c76479b1929d25e7e1ee645b8fce60f9a5559528a97654c1fbc5 (image=quay.io/prometheus/prometheus:v2.51.0, name=priceless_hawking, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec 11 09:18:10 compute-0 podman[97978]: 2025-12-11 09:18:10.790526027 +0000 UTC m=+0.154706361 container died adddc1ab5d08c76479b1929d25e7e1ee645b8fce60f9a5559528a97654c1fbc5 (image=quay.io/prometheus/prometheus:v2.51.0, name=priceless_hawking, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec 11 09:18:10 compute-0 systemd[1]: var-lib-containers-storage-overlay-88ca7d45022abab1cd4ed984922d62830df55869ee69ea9d6574749ab80221a3-merged.mount: Deactivated successfully.
Dec 11 09:18:10 compute-0 podman[97978]: 2025-12-11 09:18:10.825287538 +0000 UTC m=+0.189467842 container remove adddc1ab5d08c76479b1929d25e7e1ee645b8fce60f9a5559528a97654c1fbc5 (image=quay.io/prometheus/prometheus:v2.51.0, name=priceless_hawking, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec 11 09:18:10 compute-0 podman[97978]: 2025-12-11 09:18:10.829093625 +0000 UTC m=+0.193273969 volume remove 6c1062536cfad2be67bab6952591ff9a8344eda61c2db5d796a5ea26ec817c5e
Dec 11 09:18:10 compute-0 systemd[1]: libpod-conmon-adddc1ab5d08c76479b1929d25e7e1ee645b8fce60f9a5559528a97654c1fbc5.scope: Deactivated successfully.
Dec 11 09:18:10 compute-0 systemd[1]: Reloading.
Dec 11 09:18:10 compute-0 radosgw[93354]: ====== starting new request req=0x7f36cd2845d0 =====
Dec 11 09:18:10 compute-0 radosgw[93354]: ====== req done req=0x7f36cd2845d0 op status=0 http_status=200 latency=0.001000031s ======
Dec 11 09:18:10 compute-0 radosgw[93354]: beast: 0x7f36cd2845d0: 192.168.122.102 - anonymous [11/Dec/2025:09:18:10.921 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Dec 11 09:18:10 compute-0 ceph-mgr[74715]: log_channel(cluster) log [DBG] : pgmap v133: 353 pgs: 2 unknown, 351 active+clean; 456 KiB data, 144 MiB used, 60 GiB / 60 GiB avail
Dec 11 09:18:10 compute-0 systemd-rc-local-generator[98041]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 11 09:18:10 compute-0 systemd-sysv-generator[98044]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 11 09:18:10 compute-0 radosgw[93354]: ====== starting new request req=0x7f36cd2845d0 =====
Dec 11 09:18:10 compute-0 radosgw[93354]: ====== req done req=0x7f36cd2845d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 11 09:18:10 compute-0 radosgw[93354]: beast: 0x7f36cd2845d0: 192.168.122.100 - anonymous [11/Dec/2025:09:18:10.961 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 11 09:18:11 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-2-0-compute-0-iryjby[94782]: 11/12/2025 09:18:11 : epoch 693a8bfe : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fef60003490 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 11 09:18:11 compute-0 ceph-mon[74426]: mon.compute-0@0(leader).osd e106 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 11 09:18:11 compute-0 systemd[1]: Reloading.
Dec 11 09:18:11 compute-0 systemd-rc-local-generator[98079]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 11 09:18:11 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-2-0-compute-0-iryjby[94782]: 11/12/2025 09:18:11 : epoch 693a8bfe : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fef500032f0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 11 09:18:11 compute-0 systemd-sysv-generator[98085]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 11 09:18:11 compute-0 ceph-mon[74426]: mon.compute-0@0(leader).osd e106 do_prune osdmap full prune enabled
Dec 11 09:18:11 compute-0 systemd[1]: Starting Ceph prometheus.compute-0 for 31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060...
Dec 11 09:18:11 compute-0 ceph-mon[74426]: mon.compute-0@0(leader).osd e107 e107: 3 total, 3 up, 3 in
Dec 11 09:18:11 compute-0 ceph-mon[74426]: log_channel(cluster) log [DBG] : osdmap e107: 3 total, 3 up, 3 in
Dec 11 09:18:11 compute-0 ceph-mon[74426]: osdmap e106: 3 total, 3 up, 3 in
Dec 11 09:18:11 compute-0 ceph-mon[74426]: pgmap v133: 353 pgs: 2 unknown, 351 active+clean; 456 KiB data, 144 MiB used, 60 GiB / 60 GiB avail
Dec 11 09:18:12 compute-0 podman[98139]: 2025-12-11 09:18:12.027269386 +0000 UTC m=+0.038948921 image pull 1d3b7f56885b6dd623f1785be963aa9c195f86bc256ea454e8d02a7980b79c53 quay.io/prometheus/prometheus:v2.51.0
Dec 11 09:18:12 compute-0 podman[98139]: 2025-12-11 09:18:12.356915149 +0000 UTC m=+0.368594624 container create 90c28ad7bc328840534f9285d34005394d82c163694e46ece6c6c1f2d03e7fe2 (image=quay.io/prometheus/prometheus:v2.51.0, name=ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-prometheus-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec 11 09:18:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f3e907d0a54e115702b5438642cb6eb9444a55a34ff8e15379bd6ce8162b9cc6/merged/prometheus supports timestamps until 2038 (0x7fffffff)
Dec 11 09:18:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f3e907d0a54e115702b5438642cb6eb9444a55a34ff8e15379bd6ce8162b9cc6/merged/etc/prometheus supports timestamps until 2038 (0x7fffffff)
Dec 11 09:18:12 compute-0 podman[98139]: 2025-12-11 09:18:12.440220498 +0000 UTC m=+0.451900003 container init 90c28ad7bc328840534f9285d34005394d82c163694e46ece6c6c1f2d03e7fe2 (image=quay.io/prometheus/prometheus:v2.51.0, name=ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-prometheus-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec 11 09:18:12 compute-0 podman[98139]: 2025-12-11 09:18:12.446686137 +0000 UTC m=+0.458365642 container start 90c28ad7bc328840534f9285d34005394d82c163694e46ece6c6c1f2d03e7fe2 (image=quay.io/prometheus/prometheus:v2.51.0, name=ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-prometheus-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec 11 09:18:12 compute-0 bash[98139]: 90c28ad7bc328840534f9285d34005394d82c163694e46ece6c6c1f2d03e7fe2
Dec 11 09:18:12 compute-0 systemd[1]: Started Ceph prometheus.compute-0 for 31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060.
Dec 11 09:18:12 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-prometheus-compute-0[98154]: ts=2025-12-11T09:18:12.503Z caller=main.go:617 level=info msg="Starting Prometheus Server" mode=server version="(version=2.51.0, branch=HEAD, revision=c05c15512acb675e3f6cd662a6727854e93fc024)"
Dec 11 09:18:12 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-prometheus-compute-0[98154]: ts=2025-12-11T09:18:12.503Z caller=main.go:622 level=info build_context="(go=go1.22.1, platform=linux/amd64, user=root@b5723e458358, date=20240319-10:54:45, tags=netgo,builtinassets,stringlabels)"
Dec 11 09:18:12 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-prometheus-compute-0[98154]: ts=2025-12-11T09:18:12.503Z caller=main.go:623 level=info host_details="(Linux 5.14.0-648.el9.x86_64 #1 SMP PREEMPT_DYNAMIC Fri Dec 5 11:18:23 UTC 2025 x86_64 compute-0 (none))"
Dec 11 09:18:12 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-prometheus-compute-0[98154]: ts=2025-12-11T09:18:12.503Z caller=main.go:624 level=info fd_limits="(soft=1048576, hard=1048576)"
Dec 11 09:18:12 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-prometheus-compute-0[98154]: ts=2025-12-11T09:18:12.503Z caller=main.go:625 level=info vm_limits="(soft=unlimited, hard=unlimited)"
Dec 11 09:18:12 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-prometheus-compute-0[98154]: ts=2025-12-11T09:18:12.506Z caller=web.go:568 level=info component=web msg="Start listening for connections" address=192.168.122.100:9095
Dec 11 09:18:12 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-prometheus-compute-0[98154]: ts=2025-12-11T09:18:12.507Z caller=main.go:1129 level=info msg="Starting TSDB ..."
Dec 11 09:18:12 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-prometheus-compute-0[98154]: ts=2025-12-11T09:18:12.510Z caller=tls_config.go:313 level=info component=web msg="Listening on" address=192.168.122.100:9095
Dec 11 09:18:12 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-prometheus-compute-0[98154]: ts=2025-12-11T09:18:12.510Z caller=tls_config.go:316 level=info component=web msg="TLS is disabled." http2=false address=192.168.122.100:9095
Dec 11 09:18:12 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-prometheus-compute-0[98154]: ts=2025-12-11T09:18:12.515Z caller=head.go:616 level=info component=tsdb msg="Replaying on-disk memory mappable chunks if any"
Dec 11 09:18:12 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-prometheus-compute-0[98154]: ts=2025-12-11T09:18:12.515Z caller=head.go:698 level=info component=tsdb msg="On-disk memory mappable chunks replay completed" duration=9.79µs
Dec 11 09:18:12 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-prometheus-compute-0[98154]: ts=2025-12-11T09:18:12.515Z caller=head.go:706 level=info component=tsdb msg="Replaying WAL, this may take a while"
Dec 11 09:18:12 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-prometheus-compute-0[98154]: ts=2025-12-11T09:18:12.516Z caller=head.go:778 level=info component=tsdb msg="WAL segment loaded" segment=0 maxSegment=0
Dec 11 09:18:12 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-prometheus-compute-0[98154]: ts=2025-12-11T09:18:12.516Z caller=head.go:815 level=info component=tsdb msg="WAL replay completed" checkpoint_replay_duration=323.52µs wal_replay_duration=496.206µs wbl_replay_duration=280ns total_replay_duration=856.207µs
Dec 11 09:18:12 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-prometheus-compute-0[98154]: ts=2025-12-11T09:18:12.518Z caller=main.go:1150 level=info fs_type=XFS_SUPER_MAGIC
Dec 11 09:18:12 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-prometheus-compute-0[98154]: ts=2025-12-11T09:18:12.518Z caller=main.go:1153 level=info msg="TSDB started"
Dec 11 09:18:12 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-prometheus-compute-0[98154]: ts=2025-12-11T09:18:12.518Z caller=main.go:1335 level=info msg="Loading configuration file" filename=/etc/prometheus/prometheus.yml
Dec 11 09:18:12 compute-0 sudo[97563]: pam_unix(sudo:session): session closed for user root
Dec 11 09:18:12 compute-0 ceph-mon[74426]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec 11 09:18:12 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-prometheus-compute-0[98154]: ts=2025-12-11T09:18:12.554Z caller=main.go:1372 level=info msg="Completed loading of configuration file" filename=/etc/prometheus/prometheus.yml totalDuration=35.66456ms db_storage=10.461µs remote_storage=3.12µs web_handler=1.59µs query_engine=2.01µs scrape=6.603514ms scrape_sd=337.841µs notify=31.101µs notify_sd=19.55µs rules=25.862928ms tracing=19.061µs
Dec 11 09:18:12 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-prometheus-compute-0[98154]: ts=2025-12-11T09:18:12.556Z caller=main.go:1114 level=info msg="Server is ready to receive web requests."
Dec 11 09:18:12 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-prometheus-compute-0[98154]: ts=2025-12-11T09:18:12.556Z caller=manager.go:163 level=info component="rule manager" msg="Starting rule manager..."
Dec 11 09:18:12 compute-0 ceph-mon[74426]: log_channel(audit) log [INF] : from='mgr.14400 192.168.122.100:0/1063598679' entity='mgr.compute-0.wwpcae' 
Dec 11 09:18:12 compute-0 ceph-mon[74426]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec 11 09:18:12 compute-0 ceph-mon[74426]: log_channel(audit) log [INF] : from='mgr.14400 192.168.122.100:0/1063598679' entity='mgr.compute-0.wwpcae' 
Dec 11 09:18:12 compute-0 ceph-mon[74426]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.prometheus}] v 0)
Dec 11 09:18:12 compute-0 ceph-mon[74426]: log_channel(audit) log [INF] : from='mgr.14400 192.168.122.100:0/1063598679' entity='mgr.compute-0.wwpcae' 
Dec 11 09:18:12 compute-0 ceph-mgr[74715]: [progress INFO root] complete: finished ev 5540e681-1069-4997-8004-6d8a509f004b (Updating prometheus deployment (+1 -> 1))
Dec 11 09:18:12 compute-0 ceph-mgr[74715]: [progress INFO root] Completed event 5540e681-1069-4997-8004-6d8a509f004b (Updating prometheus deployment (+1 -> 1)) in 10 seconds
Dec 11 09:18:12 compute-0 ceph-mon[74426]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr module enable", "module": "prometheus"} v 0)
Dec 11 09:18:12 compute-0 ceph-mon[74426]: log_channel(audit) log [INF] : from='mgr.14400 192.168.122.100:0/1063598679' entity='mgr.compute-0.wwpcae' cmd=[{"prefix": "mgr module enable", "module": "prometheus"}]: dispatch
Dec 11 09:18:12 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-2-0-compute-0-iryjby[94782]: 11/12/2025 09:18:12 : epoch 693a8bfe : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fef54004050 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 11 09:18:12 compute-0 ceph-mon[74426]: osdmap e107: 3 total, 3 up, 3 in
Dec 11 09:18:12 compute-0 ceph-mon[74426]: from='mgr.14400 192.168.122.100:0/1063598679' entity='mgr.compute-0.wwpcae' 
Dec 11 09:18:12 compute-0 ceph-mon[74426]: from='mgr.14400 192.168.122.100:0/1063598679' entity='mgr.compute-0.wwpcae' 
Dec 11 09:18:12 compute-0 ceph-mon[74426]: from='mgr.14400 192.168.122.100:0/1063598679' entity='mgr.compute-0.wwpcae' 
Dec 11 09:18:12 compute-0 ceph-mon[74426]: from='mgr.14400 192.168.122.100:0/1063598679' entity='mgr.compute-0.wwpcae' cmd=[{"prefix": "mgr module enable", "module": "prometheus"}]: dispatch
Dec 11 09:18:12 compute-0 radosgw[93354]: ====== starting new request req=0x7f36cd2845d0 =====
Dec 11 09:18:12 compute-0 radosgw[93354]: ====== req done req=0x7f36cd2845d0 op status=0 http_status=200 latency=0.001000032s ======
Dec 11 09:18:12 compute-0 radosgw[93354]: beast: 0x7f36cd2845d0: 192.168.122.102 - anonymous [11/Dec/2025:09:18:12.925 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Dec 11 09:18:12 compute-0 ceph-mgr[74715]: log_channel(cluster) log [DBG] : pgmap v135: 353 pgs: 2 unknown, 351 active+clean; 456 KiB data, 144 MiB used, 60 GiB / 60 GiB avail
Dec 11 09:18:12 compute-0 radosgw[93354]: ====== starting new request req=0x7f36cd2845d0 =====
Dec 11 09:18:12 compute-0 radosgw[93354]: ====== req done req=0x7f36cd2845d0 op status=0 http_status=200 latency=0.001000031s ======
Dec 11 09:18:12 compute-0 radosgw[93354]: beast: 0x7f36cd2845d0: 192.168.122.100 - anonymous [11/Dec/2025:09:18:12.963 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Dec 11 09:18:13 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-2-0-compute-0-iryjby[94782]: 11/12/2025 09:18:13 : epoch 693a8bfe : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fef70004440 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 11 09:18:13 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-2-0-compute-0-iryjby[94782]: 11/12/2025 09:18:13 : epoch 693a8bfe : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fef60003490 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 11 09:18:13 compute-0 ceph-mgr[74715]: [progress INFO root] Writing back 31 completed events
Dec 11 09:18:13 compute-0 ceph-mon[74426]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0)
Dec 11 09:18:14 compute-0 ceph-mon[74426]: log_channel(audit) log [INF] : from='mgr.14400 192.168.122.100:0/1063598679' entity='mgr.compute-0.wwpcae' 
Dec 11 09:18:14 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-2-0-compute-0-iryjby[94782]: 11/12/2025 09:18:14 : epoch 693a8bfe : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fef500032f0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 11 09:18:14 compute-0 radosgw[93354]: ====== starting new request req=0x7f36cd2845d0 =====
Dec 11 09:18:14 compute-0 radosgw[93354]: ====== req done req=0x7f36cd2845d0 op status=0 http_status=200 latency=0.001000031s ======
Dec 11 09:18:14 compute-0 radosgw[93354]: beast: 0x7f36cd2845d0: 192.168.122.102 - anonymous [11/Dec/2025:09:18:14.929 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Dec 11 09:18:14 compute-0 ceph-mgr[74715]: log_channel(cluster) log [DBG] : pgmap v136: 353 pgs: 353 active+clean; 456 KiB data, 144 MiB used, 60 GiB / 60 GiB avail; 0 B/s, 1 objects/s recovering
Dec 11 09:18:14 compute-0 radosgw[93354]: ====== starting new request req=0x7f36cd2845d0 =====
Dec 11 09:18:14 compute-0 radosgw[93354]: ====== req done req=0x7f36cd2845d0 op status=0 http_status=200 latency=0.001000031s ======
Dec 11 09:18:14 compute-0 radosgw[93354]: beast: 0x7f36cd2845d0: 192.168.122.100 - anonymous [11/Dec/2025:09:18:14.966 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Dec 11 09:18:15 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-2-0-compute-0-iryjby[94782]: 11/12/2025 09:18:15 : epoch 693a8bfe : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fef54004050 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 11 09:18:15 compute-0 ceph-mon[74426]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "17"} v 0)
Dec 11 09:18:15 compute-0 ceph-mon[74426]: log_channel(audit) log [INF] : from='mgr.14400 192.168.122.100:0/1063598679' entity='mgr.compute-0.wwpcae' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "17"}]: dispatch
Dec 11 09:18:15 compute-0 ceph-mon[74426]: pgmap v135: 353 pgs: 2 unknown, 351 active+clean; 456 KiB data, 144 MiB used, 60 GiB / 60 GiB avail
Dec 11 09:18:15 compute-0 ceph-mon[74426]: log_channel(audit) log [INF] : from='mgr.14400 192.168.122.100:0/1063598679' entity='mgr.compute-0.wwpcae' cmd='[{"prefix": "mgr module enable", "module": "prometheus"}]': finished
Dec 11 09:18:15 compute-0 ceph-mgr[74715]: mgr handle_mgr_map respawning because set of enabled modules changed!
Dec 11 09:18:15 compute-0 ceph-mgr[74715]: mgr respawn  e: '/usr/bin/ceph-mgr'
Dec 11 09:18:15 compute-0 ceph-mgr[74715]: mgr respawn  0: '/usr/bin/ceph-mgr'
Dec 11 09:18:15 compute-0 ceph-mgr[74715]: mgr respawn  1: '-n'
Dec 11 09:18:15 compute-0 ceph-mgr[74715]: mgr respawn  2: 'mgr.compute-0.wwpcae'
Dec 11 09:18:15 compute-0 ceph-mgr[74715]: mgr respawn  3: '-f'
Dec 11 09:18:15 compute-0 ceph-mgr[74715]: mgr respawn  4: '--setuser'
Dec 11 09:18:15 compute-0 ceph-mgr[74715]: mgr respawn  5: 'ceph'
Dec 11 09:18:15 compute-0 ceph-mgr[74715]: mgr respawn  6: '--setgroup'
Dec 11 09:18:15 compute-0 ceph-mgr[74715]: mgr respawn  7: 'ceph'
Dec 11 09:18:15 compute-0 ceph-mgr[74715]: mgr respawn  8: '--default-log-to-file=false'
Dec 11 09:18:15 compute-0 ceph-mon[74426]: log_channel(cluster) log [DBG] : mgrmap e30: compute-0.wwpcae(active, since 2m), standbys: compute-1.unesvp, compute-2.uiimcn
Dec 11 09:18:15 compute-0 sshd-session[90531]: Connection closed by 192.168.122.100 port 51840
Dec 11 09:18:15 compute-0 sshd-session[90498]: pam_unix(sshd:session): session closed for user ceph-admin
Dec 11 09:18:15 compute-0 systemd[1]: session-35.scope: Deactivated successfully.
Dec 11 09:18:15 compute-0 systemd[1]: session-35.scope: Consumed 1min 5.068s CPU time.
Dec 11 09:18:15 compute-0 systemd-logind[792]: Session 35 logged out. Waiting for processes to exit.
Dec 11 09:18:15 compute-0 systemd-logind[792]: Removed session 35.
Dec 11 09:18:15 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-mgr-compute-0-wwpcae[74711]: ignoring --setuser ceph since I am not root
Dec 11 09:18:15 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-mgr-compute-0-wwpcae[74711]: ignoring --setgroup ceph since I am not root
Dec 11 09:18:15 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-2-0-compute-0-iryjby[94782]: 11/12/2025 09:18:15 : epoch 693a8bfe : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fef54004050 fd 14 proxy header rest len failed header rlen = % (will set dead)
Dec 11 09:18:15 compute-0 ceph-mgr[74715]: ceph version 19.2.3 (c92aebb279828e9c3c1f5d24613efca272649e62) squid (stable), process ceph-mgr, pid 2
Dec 11 09:18:15 compute-0 ceph-mgr[74715]: pidfile_write: ignore empty --pid-file
Dec 11 09:18:15 compute-0 ceph-mgr[74715]: mgr[py] Loading python module 'alerts'
Dec 11 09:18:15 compute-0 sudo[98225]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lckreitlixcfgubpnlalqwnsgqszgrpi ; /usr/bin/python3'
Dec 11 09:18:15 compute-0 sudo[98225]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 11 09:18:15 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-mgr-compute-0-wwpcae[74711]: 2025-12-11T09:18:15.561+0000 7f0a4e52e140 -1 mgr[py] Module alerts has missing NOTIFY_TYPES member
Dec 11 09:18:15 compute-0 ceph-mgr[74715]: mgr[py] Module alerts has missing NOTIFY_TYPES member
Dec 11 09:18:15 compute-0 ceph-mgr[74715]: mgr[py] Loading python module 'balancer'
Dec 11 09:18:15 compute-0 ceph-mon[74426]: mon.compute-0@0(leader).osd e107 do_prune osdmap full prune enabled
Dec 11 09:18:15 compute-0 python3[98227]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint radosgw-admin quay.io/ceph/ceph:v19 --fsid 31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch ps --daemon-type rgw --format json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 11 09:18:15 compute-0 ceph-mon[74426]: log_channel(audit) log [INF] : from='mgr.14400 192.168.122.100:0/1063598679' entity='mgr.compute-0.wwpcae' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "17"}]': finished
Dec 11 09:18:15 compute-0 ceph-mon[74426]: mon.compute-0@0(leader).osd e108 e108: 3 total, 3 up, 3 in
Dec 11 09:18:15 compute-0 ceph-mon[74426]: log_channel(cluster) log [DBG] : osdmap e108: 3 total, 3 up, 3 in
Dec 11 09:18:15 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-mgr-compute-0-wwpcae[74711]: 2025-12-11T09:18:15.682+0000 7f0a4e52e140 -1 mgr[py] Module balancer has missing NOTIFY_TYPES member
Dec 11 09:18:15 compute-0 ceph-mgr[74715]: mgr[py] Module balancer has missing NOTIFY_TYPES member
Dec 11 09:18:15 compute-0 ceph-mgr[74715]: mgr[py] Loading python module 'cephadm'
Dec 11 09:18:15 compute-0 podman[98228]: 2025-12-11 09:18:15.731191161 +0000 UTC m=+0.071213657 container create a5c891a3dcdbcf39bb4d10fe019a7d92ce12977877671f46aa8033348381a790 (image=quay.io/ceph/ceph:v19, name=serene_gates, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.build-date=20250325)
Dec 11 09:18:15 compute-0 systemd[1]: Started libpod-conmon-a5c891a3dcdbcf39bb4d10fe019a7d92ce12977877671f46aa8033348381a790.scope.
Dec 11 09:18:15 compute-0 podman[98228]: 2025-12-11 09:18:15.710459912 +0000 UTC m=+0.050482428 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Dec 11 09:18:15 compute-0 systemd[1]: Started libcrun container.
Dec 11 09:18:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6df6c826b26910e2f7aaaa3d1ce54adac300147aa9410186aea7821700b741a8/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec 11 09:18:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6df6c826b26910e2f7aaaa3d1ce54adac300147aa9410186aea7821700b741a8/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 11 09:18:15 compute-0 podman[98228]: 2025-12-11 09:18:15.913186532 +0000 UTC m=+0.253209048 container init a5c891a3dcdbcf39bb4d10fe019a7d92ce12977877671f46aa8033348381a790 (image=quay.io/ceph/ceph:v19, name=serene_gates, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, io.buildah.version=1.40.1)
Dec 11 09:18:15 compute-0 podman[98228]: 2025-12-11 09:18:15.924824061 +0000 UTC m=+0.264846557 container start a5c891a3dcdbcf39bb4d10fe019a7d92ce12977877671f46aa8033348381a790 (image=quay.io/ceph/ceph:v19, name=serene_gates, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=squid, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Dec 11 09:18:15 compute-0 podman[98228]: 2025-12-11 09:18:15.929314729 +0000 UTC m=+0.269337225 container attach a5c891a3dcdbcf39bb4d10fe019a7d92ce12977877671f46aa8033348381a790 (image=quay.io/ceph/ceph:v19, name=serene_gates, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 11 09:18:15 compute-0 serene_gates[98244]: ERROR: invalid flag --daemon-type
Dec 11 09:18:16 compute-0 systemd[1]: libpod-a5c891a3dcdbcf39bb4d10fe019a7d92ce12977877671f46aa8033348381a790.scope: Deactivated successfully.
Dec 11 09:18:16 compute-0 podman[98228]: 2025-12-11 09:18:16.003244239 +0000 UTC m=+0.343266745 container died a5c891a3dcdbcf39bb4d10fe019a7d92ce12977877671f46aa8033348381a790 (image=quay.io/ceph/ceph:v19, name=serene_gates, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 11 09:18:16 compute-0 systemd[1]: var-lib-containers-storage-overlay-6df6c826b26910e2f7aaaa3d1ce54adac300147aa9410186aea7821700b741a8-merged.mount: Deactivated successfully.
Dec 11 09:18:16 compute-0 podman[98228]: 2025-12-11 09:18:16.059771161 +0000 UTC m=+0.399793657 container remove a5c891a3dcdbcf39bb4d10fe019a7d92ce12977877671f46aa8033348381a790 (image=quay.io/ceph/ceph:v19, name=serene_gates, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 11 09:18:16 compute-0 sudo[98225]: pam_unix(sudo:session): session closed for user root
Dec 11 09:18:16 compute-0 systemd[1]: libpod-conmon-a5c891a3dcdbcf39bb4d10fe019a7d92ce12977877671f46aa8033348381a790.scope: Deactivated successfully.
Dec 11 09:18:16 compute-0 ceph-mon[74426]: mon.compute-0@0(leader).osd e108 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 11 09:18:16 compute-0 ceph-mon[74426]: mon.compute-0@0(leader).osd e108 do_prune osdmap full prune enabled
Dec 11 09:18:16 compute-0 ceph-mon[74426]: mon.compute-0@0(leader).osd e109 e109: 3 total, 3 up, 3 in
Dec 11 09:18:16 compute-0 ceph-mon[74426]: log_channel(cluster) log [DBG] : osdmap e109: 3 total, 3 up, 3 in
Dec 11 09:18:16 compute-0 ceph-mon[74426]: from='mgr.14400 192.168.122.100:0/1063598679' entity='mgr.compute-0.wwpcae' 
Dec 11 09:18:16 compute-0 ceph-mon[74426]: pgmap v136: 353 pgs: 353 active+clean; 456 KiB data, 144 MiB used, 60 GiB / 60 GiB avail; 0 B/s, 1 objects/s recovering
Dec 11 09:18:16 compute-0 ceph-mon[74426]: from='mgr.14400 192.168.122.100:0/1063598679' entity='mgr.compute-0.wwpcae' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "17"}]: dispatch
Dec 11 09:18:16 compute-0 ceph-mon[74426]: from='mgr.14400 192.168.122.100:0/1063598679' entity='mgr.compute-0.wwpcae' cmd='[{"prefix": "mgr module enable", "module": "prometheus"}]': finished
Dec 11 09:18:16 compute-0 ceph-mon[74426]: mgrmap e30: compute-0.wwpcae(active, since 2m), standbys: compute-1.unesvp, compute-2.uiimcn
Dec 11 09:18:16 compute-0 ceph-mon[74426]: from='mgr.14400 192.168.122.100:0/1063598679' entity='mgr.compute-0.wwpcae' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "17"}]': finished
Dec 11 09:18:16 compute-0 ceph-mon[74426]: osdmap e108: 3 total, 3 up, 3 in
Dec 11 09:18:16 compute-0 ceph-mgr[74715]: mgr[py] Loading python module 'crash'
Dec 11 09:18:16 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-2-0-compute-0-iryjby[94782]: 11/12/2025 09:18:16 : epoch 693a8bfe : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fef60003490 fd 14 proxy header rest len failed header rlen = % (will set dead)
Dec 11 09:18:16 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-mgr-compute-0-wwpcae[74711]: 2025-12-11T09:18:16.756+0000 7f0a4e52e140 -1 mgr[py] Module crash has missing NOTIFY_TYPES member
Dec 11 09:18:16 compute-0 ceph-mgr[74715]: mgr[py] Module crash has missing NOTIFY_TYPES member
Dec 11 09:18:16 compute-0 ceph-mgr[74715]: mgr[py] Loading python module 'dashboard'
Dec 11 09:18:16 compute-0 radosgw[93354]: ====== starting new request req=0x7f36cd2845d0 =====
Dec 11 09:18:16 compute-0 radosgw[93354]: ====== req done req=0x7f36cd2845d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 11 09:18:16 compute-0 radosgw[93354]: beast: 0x7f36cd2845d0: 192.168.122.102 - anonymous [11/Dec/2025:09:18:16.933 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 11 09:18:16 compute-0 radosgw[93354]: ====== starting new request req=0x7f36cd2845d0 =====
Dec 11 09:18:16 compute-0 radosgw[93354]: ====== req done req=0x7f36cd2845d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 11 09:18:16 compute-0 radosgw[93354]: beast: 0x7f36cd2845d0: 192.168.122.100 - anonymous [11/Dec/2025:09:18:16.968 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 11 09:18:17 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-2-0-compute-0-iryjby[94782]: 11/12/2025 09:18:17 : epoch 693a8bfe : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fef500032f0 fd 14 proxy header rest len failed header rlen = % (will set dead)
Dec 11 09:18:17 compute-0 ceph-mon[74426]: mon.compute-0@0(leader).osd e109 do_prune osdmap full prune enabled
Dec 11 09:18:17 compute-0 ceph-mon[74426]: mon.compute-0@0(leader).osd e110 e110: 3 total, 3 up, 3 in
Dec 11 09:18:17 compute-0 ceph-mon[74426]: log_channel(cluster) log [DBG] : osdmap e110: 3 total, 3 up, 3 in
Dec 11 09:18:17 compute-0 ceph-mon[74426]: osdmap e109: 3 total, 3 up, 3 in
Dec 11 09:18:17 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-2-0-compute-0-iryjby[94782]: 11/12/2025 09:18:17 : epoch 693a8bfe : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fef54004050 fd 14 proxy header rest len failed header rlen = % (will set dead)
Dec 11 09:18:17 compute-0 ceph-mgr[74715]: mgr[py] Loading python module 'devicehealth'
Dec 11 09:18:17 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-mgr-compute-0-wwpcae[74711]: 2025-12-11T09:18:17.561+0000 7f0a4e52e140 -1 mgr[py] Module devicehealth has missing NOTIFY_TYPES member
Dec 11 09:18:17 compute-0 ceph-mgr[74715]: mgr[py] Module devicehealth has missing NOTIFY_TYPES member
Dec 11 09:18:17 compute-0 ceph-mgr[74715]: mgr[py] Loading python module 'diskprediction_local'
Dec 11 09:18:17 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-mgr-compute-0-wwpcae[74711]: /lib64/python3.9/site-packages/scipy/__init__.py:73: UserWarning: NumPy was imported from a Python sub-interpreter but NumPy does not properly support sub-interpreters. This will likely work for most users but might cause hard to track down issues or subtle bugs. A common user of the rare sub-interpreter feature is wsgi which also allows single-interpreter mode.
Dec 11 09:18:17 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-mgr-compute-0-wwpcae[74711]: Improvements in the case of bugs are welcome, but is not on the NumPy roadmap, and full support may require significant effort to achieve.
Dec 11 09:18:17 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-mgr-compute-0-wwpcae[74711]:   from numpy import show_config as show_numpy_config
Dec 11 09:18:17 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-mgr-compute-0-wwpcae[74711]: 2025-12-11T09:18:17.803+0000 7f0a4e52e140 -1 mgr[py] Module diskprediction_local has missing NOTIFY_TYPES member
Dec 11 09:18:17 compute-0 ceph-mgr[74715]: mgr[py] Module diskprediction_local has missing NOTIFY_TYPES member
Dec 11 09:18:17 compute-0 ceph-mgr[74715]: mgr[py] Loading python module 'influx'
Dec 11 09:18:17 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-mgr-compute-0-wwpcae[74711]: 2025-12-11T09:18:17.890+0000 7f0a4e52e140 -1 mgr[py] Module influx has missing NOTIFY_TYPES member
Dec 11 09:18:17 compute-0 ceph-mgr[74715]: mgr[py] Module influx has missing NOTIFY_TYPES member
Dec 11 09:18:17 compute-0 ceph-mgr[74715]: mgr[py] Loading python module 'insights'
Dec 11 09:18:17 compute-0 ceph-mgr[74715]: mgr[py] Loading python module 'iostat'
Dec 11 09:18:18 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-mgr-compute-0-wwpcae[74711]: 2025-12-11T09:18:18.050+0000 7f0a4e52e140 -1 mgr[py] Module iostat has missing NOTIFY_TYPES member
Dec 11 09:18:18 compute-0 ceph-mgr[74715]: mgr[py] Module iostat has missing NOTIFY_TYPES member
Dec 11 09:18:18 compute-0 ceph-mgr[74715]: mgr[py] Loading python module 'k8sevents'
Dec 11 09:18:18 compute-0 ceph-mon[74426]: mon.compute-0@0(leader).osd e110 do_prune osdmap full prune enabled
Dec 11 09:18:18 compute-0 ceph-mon[74426]: osdmap e110: 3 total, 3 up, 3 in
Dec 11 09:18:18 compute-0 ceph-mon[74426]: mon.compute-0@0(leader).osd e111 e111: 3 total, 3 up, 3 in
Dec 11 09:18:18 compute-0 ceph-mon[74426]: log_channel(cluster) log [DBG] : osdmap e111: 3 total, 3 up, 3 in
Dec 11 09:18:18 compute-0 ceph-mgr[74715]: mgr[py] Loading python module 'localpool'
Dec 11 09:18:18 compute-0 ceph-mgr[74715]: mgr[py] Loading python module 'mds_autoscaler'
Dec 11 09:18:18 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-2-0-compute-0-iryjby[94782]: 11/12/2025 09:18:18 : epoch 693a8bfe : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fef54004050 fd 14 proxy header rest len failed header rlen = % (will set dead)
Dec 11 09:18:18 compute-0 ceph-mgr[74715]: mgr[py] Loading python module 'mirroring'
Dec 11 09:18:18 compute-0 radosgw[93354]: ====== starting new request req=0x7f36cd2845d0 =====
Dec 11 09:18:18 compute-0 radosgw[93354]: ====== req done req=0x7f36cd2845d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 11 09:18:18 compute-0 radosgw[93354]: beast: 0x7f36cd2845d0: 192.168.122.102 - anonymous [11/Dec/2025:09:18:18.937 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 11 09:18:18 compute-0 ceph-mgr[74715]: mgr[py] Loading python module 'nfs'
Dec 11 09:18:18 compute-0 radosgw[93354]: ====== starting new request req=0x7f36cd2845d0 =====
Dec 11 09:18:18 compute-0 radosgw[93354]: ====== req done req=0x7f36cd2845d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 11 09:18:18 compute-0 radosgw[93354]: beast: 0x7f36cd2845d0: 192.168.122.100 - anonymous [11/Dec/2025:09:18:18.971 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 11 09:18:19 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-2-0-compute-0-iryjby[94782]: 11/12/2025 09:18:19 : epoch 693a8bfe : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fef60003490 fd 14 proxy header rest len failed header rlen = % (will set dead)
Dec 11 09:18:19 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-mgr-compute-0-wwpcae[74711]: 2025-12-11T09:18:19.226+0000 7f0a4e52e140 -1 mgr[py] Module nfs has missing NOTIFY_TYPES member
Dec 11 09:18:19 compute-0 ceph-mgr[74715]: mgr[py] Module nfs has missing NOTIFY_TYPES member
Dec 11 09:18:19 compute-0 ceph-mgr[74715]: mgr[py] Loading python module 'orchestrator'
Dec 11 09:18:19 compute-0 ceph-mon[74426]: mon.compute-0@0(leader).osd e111 do_prune osdmap full prune enabled
Dec 11 09:18:19 compute-0 ceph-mon[74426]: osdmap e111: 3 total, 3 up, 3 in
Dec 11 09:18:19 compute-0 ceph-mon[74426]: mon.compute-0@0(leader).osd e112 e112: 3 total, 3 up, 3 in
Dec 11 09:18:19 compute-0 ceph-mon[74426]: log_channel(cluster) log [DBG] : osdmap e112: 3 total, 3 up, 3 in
Dec 11 09:18:19 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-2-0-compute-0-iryjby[94782]: 11/12/2025 09:18:19 : epoch 693a8bfe : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fef500032f0 fd 14 proxy header rest len failed header rlen = % (will set dead)
Dec 11 09:18:19 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-mgr-compute-0-wwpcae[74711]: 2025-12-11T09:18:19.476+0000 7f0a4e52e140 -1 mgr[py] Module orchestrator has missing NOTIFY_TYPES member
Dec 11 09:18:19 compute-0 ceph-mgr[74715]: mgr[py] Module orchestrator has missing NOTIFY_TYPES member
Dec 11 09:18:19 compute-0 ceph-mgr[74715]: mgr[py] Loading python module 'osd_perf_query'
Dec 11 09:18:19 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-mgr-compute-0-wwpcae[74711]: 2025-12-11T09:18:19.572+0000 7f0a4e52e140 -1 mgr[py] Module osd_perf_query has missing NOTIFY_TYPES member
Dec 11 09:18:19 compute-0 ceph-mgr[74715]: mgr[py] Module osd_perf_query has missing NOTIFY_TYPES member
Dec 11 09:18:19 compute-0 ceph-mgr[74715]: mgr[py] Loading python module 'osd_support'
Dec 11 09:18:19 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-mgr-compute-0-wwpcae[74711]: 2025-12-11T09:18:19.650+0000 7f0a4e52e140 -1 mgr[py] Module osd_support has missing NOTIFY_TYPES member
Dec 11 09:18:19 compute-0 ceph-mgr[74715]: mgr[py] Module osd_support has missing NOTIFY_TYPES member
Dec 11 09:18:19 compute-0 ceph-mgr[74715]: mgr[py] Loading python module 'pg_autoscaler'
Dec 11 09:18:19 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-mgr-compute-0-wwpcae[74711]: 2025-12-11T09:18:19.736+0000 7f0a4e52e140 -1 mgr[py] Module pg_autoscaler has missing NOTIFY_TYPES member
Dec 11 09:18:19 compute-0 ceph-mgr[74715]: mgr[py] Module pg_autoscaler has missing NOTIFY_TYPES member
Dec 11 09:18:19 compute-0 ceph-mgr[74715]: mgr[py] Loading python module 'progress'
Dec 11 09:18:19 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-mgr-compute-0-wwpcae[74711]: 2025-12-11T09:18:19.820+0000 7f0a4e52e140 -1 mgr[py] Module progress has missing NOTIFY_TYPES member
Dec 11 09:18:19 compute-0 ceph-mgr[74715]: mgr[py] Module progress has missing NOTIFY_TYPES member
Dec 11 09:18:19 compute-0 ceph-mgr[74715]: mgr[py] Loading python module 'prometheus'
Dec 11 09:18:20 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-mgr-compute-0-wwpcae[74711]: 2025-12-11T09:18:20.264+0000 7f0a4e52e140 -1 mgr[py] Module prometheus has missing NOTIFY_TYPES member
Dec 11 09:18:20 compute-0 ceph-mgr[74715]: mgr[py] Module prometheus has missing NOTIFY_TYPES member
Dec 11 09:18:20 compute-0 ceph-mgr[74715]: mgr[py] Loading python module 'rbd_support'
Dec 11 09:18:20 compute-0 ceph-mon[74426]: osdmap e112: 3 total, 3 up, 3 in
Dec 11 09:18:20 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-mgr-compute-0-wwpcae[74711]: 2025-12-11T09:18:20.387+0000 7f0a4e52e140 -1 mgr[py] Module rbd_support has missing NOTIFY_TYPES member
Dec 11 09:18:20 compute-0 ceph-mgr[74715]: mgr[py] Module rbd_support has missing NOTIFY_TYPES member
Dec 11 09:18:20 compute-0 ceph-mgr[74715]: mgr[py] Loading python module 'restful'
Dec 11 09:18:20 compute-0 ceph-mgr[74715]: mgr[py] Loading python module 'rgw'
Dec 11 09:18:20 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-2-0-compute-0-iryjby[94782]: 11/12/2025 09:18:20 : epoch 693a8bfe : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fef4c002690 fd 14 proxy header rest len failed header rlen = % (will set dead)
Dec 11 09:18:20 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-mgr-compute-0-wwpcae[74711]: 2025-12-11T09:18:20.932+0000 7f0a4e52e140 -1 mgr[py] Module rgw has missing NOTIFY_TYPES member
Dec 11 09:18:20 compute-0 ceph-mgr[74715]: mgr[py] Module rgw has missing NOTIFY_TYPES member
Dec 11 09:18:20 compute-0 ceph-mgr[74715]: mgr[py] Loading python module 'rook'
Dec 11 09:18:20 compute-0 radosgw[93354]: ====== starting new request req=0x7f36cd2845d0 =====
Dec 11 09:18:20 compute-0 radosgw[93354]: ====== req done req=0x7f36cd2845d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 11 09:18:20 compute-0 radosgw[93354]: beast: 0x7f36cd2845d0: 192.168.122.102 - anonymous [11/Dec/2025:09:18:20.941 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 11 09:18:20 compute-0 radosgw[93354]: ====== starting new request req=0x7f36cd2845d0 =====
Dec 11 09:18:20 compute-0 radosgw[93354]: ====== req done req=0x7f36cd2845d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 11 09:18:20 compute-0 radosgw[93354]: beast: 0x7f36cd2845d0: 192.168.122.100 - anonymous [11/Dec/2025:09:18:20.973 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 11 09:18:21 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-2-0-compute-0-iryjby[94782]: 11/12/2025 09:18:21 : epoch 693a8bfe : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fef70004440 fd 14 proxy header rest len failed header rlen = % (will set dead)
Dec 11 09:18:21 compute-0 ceph-mon[74426]: mon.compute-0@0(leader).osd e112 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 11 09:18:21 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-2-0-compute-0-iryjby[94782]: 11/12/2025 09:18:21 : epoch 693a8bfe : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fef60003490 fd 14 proxy header rest len failed header rlen = % (will set dead)
Dec 11 09:18:21 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-mgr-compute-0-wwpcae[74711]: 2025-12-11T09:18:21.639+0000 7f0a4e52e140 -1 mgr[py] Module rook has missing NOTIFY_TYPES member
Dec 11 09:18:21 compute-0 ceph-mgr[74715]: mgr[py] Module rook has missing NOTIFY_TYPES member
Dec 11 09:18:21 compute-0 ceph-mgr[74715]: mgr[py] Loading python module 'selftest'
Dec 11 09:18:21 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-mgr-compute-0-wwpcae[74711]: 2025-12-11T09:18:21.727+0000 7f0a4e52e140 -1 mgr[py] Module selftest has missing NOTIFY_TYPES member
Dec 11 09:18:21 compute-0 ceph-mgr[74715]: mgr[py] Module selftest has missing NOTIFY_TYPES member
Dec 11 09:18:21 compute-0 ceph-mgr[74715]: mgr[py] Loading python module 'snap_schedule'
Dec 11 09:18:21 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-mgr-compute-0-wwpcae[74711]: 2025-12-11T09:18:21.817+0000 7f0a4e52e140 -1 mgr[py] Module snap_schedule has missing NOTIFY_TYPES member
Dec 11 09:18:21 compute-0 ceph-mgr[74715]: mgr[py] Module snap_schedule has missing NOTIFY_TYPES member
Dec 11 09:18:21 compute-0 ceph-mgr[74715]: mgr[py] Loading python module 'stats'
Dec 11 09:18:21 compute-0 ceph-mgr[74715]: mgr[py] Loading python module 'status'
Dec 11 09:18:21 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-mgr-compute-0-wwpcae[74711]: 2025-12-11T09:18:21.988+0000 7f0a4e52e140 -1 mgr[py] Module status has missing NOTIFY_TYPES member
Dec 11 09:18:21 compute-0 ceph-mgr[74715]: mgr[py] Module status has missing NOTIFY_TYPES member
Dec 11 09:18:21 compute-0 ceph-mgr[74715]: mgr[py] Loading python module 'telegraf'
Dec 11 09:18:22 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-mgr-compute-0-wwpcae[74711]: 2025-12-11T09:18:22.072+0000 7f0a4e52e140 -1 mgr[py] Module telegraf has missing NOTIFY_TYPES member
Dec 11 09:18:22 compute-0 ceph-mgr[74715]: mgr[py] Module telegraf has missing NOTIFY_TYPES member
Dec 11 09:18:22 compute-0 ceph-mgr[74715]: mgr[py] Loading python module 'telemetry'
Dec 11 09:18:22 compute-0 ceph-mon[74426]: log_channel(cluster) log [DBG] : Standby manager daemon compute-1.unesvp restarted
Dec 11 09:18:22 compute-0 ceph-mon[74426]: log_channel(cluster) log [DBG] : Standby manager daemon compute-1.unesvp started
Dec 11 09:18:22 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-mgr-compute-0-wwpcae[74711]: 2025-12-11T09:18:22.310+0000 7f0a4e52e140 -1 mgr[py] Module telemetry has missing NOTIFY_TYPES member
Dec 11 09:18:22 compute-0 ceph-mgr[74715]: mgr[py] Module telemetry has missing NOTIFY_TYPES member
Dec 11 09:18:22 compute-0 ceph-mgr[74715]: mgr[py] Loading python module 'test_orchestrator'
Dec 11 09:18:22 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-mgr-compute-0-wwpcae[74711]: 2025-12-11T09:18:22.592+0000 7f0a4e52e140 -1 mgr[py] Module test_orchestrator has missing NOTIFY_TYPES member
Dec 11 09:18:22 compute-0 ceph-mgr[74715]: mgr[py] Module test_orchestrator has missing NOTIFY_TYPES member
Dec 11 09:18:22 compute-0 ceph-mgr[74715]: mgr[py] Loading python module 'volumes'
Dec 11 09:18:22 compute-0 ceph-mon[74426]: log_channel(cluster) log [DBG] : mgrmap e31: compute-0.wwpcae(active, since 2m), standbys: compute-2.uiimcn, compute-1.unesvp
Dec 11 09:18:22 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-2-0-compute-0-iryjby[94782]: 11/12/2025 09:18:22 : epoch 693a8bfe : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fef500032f0 fd 14 proxy header rest len failed header rlen = % (will set dead)
Dec 11 09:18:22 compute-0 ceph-mon[74426]: log_channel(cluster) log [DBG] : Standby manager daemon compute-2.uiimcn restarted
Dec 11 09:18:22 compute-0 ceph-mon[74426]: log_channel(cluster) log [DBG] : Standby manager daemon compute-2.uiimcn started
Dec 11 09:18:22 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-mgr-compute-0-wwpcae[74711]: 2025-12-11T09:18:22.943+0000 7f0a4e52e140 -1 mgr[py] Module volumes has missing NOTIFY_TYPES member
Dec 11 09:18:22 compute-0 ceph-mgr[74715]: mgr[py] Module volumes has missing NOTIFY_TYPES member
Dec 11 09:18:22 compute-0 ceph-mgr[74715]: mgr[py] Loading python module 'zabbix'
Dec 11 09:18:22 compute-0 radosgw[93354]: ====== starting new request req=0x7f36cd2845d0 =====
Dec 11 09:18:22 compute-0 radosgw[93354]: ====== req done req=0x7f36cd2845d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 11 09:18:22 compute-0 radosgw[93354]: beast: 0x7f36cd2845d0: 192.168.122.102 - anonymous [11/Dec/2025:09:18:22.943 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 11 09:18:22 compute-0 ceph-mon[74426]: Standby manager daemon compute-1.unesvp restarted
Dec 11 09:18:22 compute-0 ceph-mon[74426]: Standby manager daemon compute-1.unesvp started
Dec 11 09:18:22 compute-0 radosgw[93354]: ====== starting new request req=0x7f36cd2845d0 =====
Dec 11 09:18:22 compute-0 radosgw[93354]: ====== req done req=0x7f36cd2845d0 op status=0 http_status=200 latency=0.001000031s ======
Dec 11 09:18:22 compute-0 radosgw[93354]: beast: 0x7f36cd2845d0: 192.168.122.100 - anonymous [11/Dec/2025:09:18:22.975 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Dec 11 09:18:23 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-mgr-compute-0-wwpcae[74711]: 2025-12-11T09:18:23.028+0000 7f0a4e52e140 -1 mgr[py] Module zabbix has missing NOTIFY_TYPES member
Dec 11 09:18:23 compute-0 ceph-mgr[74715]: mgr[py] Module zabbix has missing NOTIFY_TYPES member
Dec 11 09:18:23 compute-0 ceph-mon[74426]: log_channel(cluster) log [INF] : Active manager daemon compute-0.wwpcae restarted
Dec 11 09:18:23 compute-0 ceph-mon[74426]: mon.compute-0@0(leader).osd e112 do_prune osdmap full prune enabled
Dec 11 09:18:23 compute-0 ceph-mon[74426]: log_channel(cluster) log [INF] : Activating manager daemon compute-0.wwpcae
Dec 11 09:18:23 compute-0 ceph-mgr[74715]: ms_deliver_dispatch: unhandled message 0x556b71811860 mon_map magic: 0 from mon.0 v2:192.168.122.100:3300/0
Dec 11 09:18:23 compute-0 ceph-mon[74426]: mon.compute-0@0(leader).osd e113 e113: 3 total, 3 up, 3 in
Dec 11 09:18:23 compute-0 ceph-mgr[74715]: mgr handle_mgr_map Activating!
Dec 11 09:18:23 compute-0 ceph-mon[74426]: log_channel(cluster) log [DBG] : osdmap e113: 3 total, 3 up, 3 in
Dec 11 09:18:23 compute-0 ceph-mon[74426]: log_channel(cluster) log [DBG] : mgrmap e32: compute-0.wwpcae(active, starting, since 0.027586s), standbys: compute-2.uiimcn, compute-1.unesvp
Dec 11 09:18:23 compute-0 ceph-mgr[74715]: mgr handle_mgr_map I am now activating
Dec 11 09:18:23 compute-0 ceph-mon[74426]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon metadata", "id": "compute-0"} v 0)
Dec 11 09:18:23 compute-0 ceph-mon[74426]: log_channel(audit) log [DBG] : from='mgr.14577 192.168.122.100:0/2102670525' entity='mgr.compute-0.wwpcae' cmd=[{"prefix": "mon metadata", "id": "compute-0"}]: dispatch
Dec 11 09:18:23 compute-0 ceph-mon[74426]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon metadata", "id": "compute-1"} v 0)
Dec 11 09:18:23 compute-0 ceph-mon[74426]: log_channel(audit) log [DBG] : from='mgr.14577 192.168.122.100:0/2102670525' entity='mgr.compute-0.wwpcae' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Dec 11 09:18:23 compute-0 ceph-mon[74426]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon metadata", "id": "compute-2"} v 0)
Dec 11 09:18:23 compute-0 ceph-mon[74426]: log_channel(audit) log [DBG] : from='mgr.14577 192.168.122.100:0/2102670525' entity='mgr.compute-0.wwpcae' cmd=[{"prefix": "mon metadata", "id": "compute-2"}]: dispatch
Dec 11 09:18:23 compute-0 ceph-mon[74426]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mds metadata", "who": "cephfs.compute-0.ejykhm"} v 0)
Dec 11 09:18:23 compute-0 ceph-mon[74426]: log_channel(audit) log [DBG] : from='mgr.14577 192.168.122.100:0/2102670525' entity='mgr.compute-0.wwpcae' cmd=[{"prefix": "mds metadata", "who": "cephfs.compute-0.ejykhm"}]: dispatch
Dec 11 09:18:23 compute-0 ceph-mon[74426]: mon.compute-0@0(leader).mds e9 all = 0
Dec 11 09:18:23 compute-0 ceph-mon[74426]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mds metadata", "who": "cephfs.compute-2.abebdg"} v 0)
Dec 11 09:18:23 compute-0 ceph-mon[74426]: log_channel(audit) log [DBG] : from='mgr.14577 192.168.122.100:0/2102670525' entity='mgr.compute-0.wwpcae' cmd=[{"prefix": "mds metadata", "who": "cephfs.compute-2.abebdg"}]: dispatch
Dec 11 09:18:23 compute-0 ceph-mon[74426]: mon.compute-0@0(leader).mds e9 all = 0
Dec 11 09:18:23 compute-0 ceph-mon[74426]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mds metadata", "who": "cephfs.compute-1.hifxsh"} v 0)
Dec 11 09:18:23 compute-0 ceph-mon[74426]: log_channel(audit) log [DBG] : from='mgr.14577 192.168.122.100:0/2102670525' entity='mgr.compute-0.wwpcae' cmd=[{"prefix": "mds metadata", "who": "cephfs.compute-1.hifxsh"}]: dispatch
Dec 11 09:18:23 compute-0 ceph-mon[74426]: mon.compute-0@0(leader).mds e9 all = 0
Dec 11 09:18:23 compute-0 ceph-mon[74426]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr metadata", "who": "compute-0.wwpcae", "id": "compute-0.wwpcae"} v 0)
Dec 11 09:18:23 compute-0 ceph-mon[74426]: log_channel(audit) log [DBG] : from='mgr.14577 192.168.122.100:0/2102670525' entity='mgr.compute-0.wwpcae' cmd=[{"prefix": "mgr metadata", "who": "compute-0.wwpcae", "id": "compute-0.wwpcae"}]: dispatch
Dec 11 09:18:23 compute-0 ceph-mon[74426]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr metadata", "who": "compute-2.uiimcn", "id": "compute-2.uiimcn"} v 0)
Dec 11 09:18:23 compute-0 ceph-mon[74426]: log_channel(audit) log [DBG] : from='mgr.14577 192.168.122.100:0/2102670525' entity='mgr.compute-0.wwpcae' cmd=[{"prefix": "mgr metadata", "who": "compute-2.uiimcn", "id": "compute-2.uiimcn"}]: dispatch
Dec 11 09:18:23 compute-0 ceph-mon[74426]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr metadata", "who": "compute-1.unesvp", "id": "compute-1.unesvp"} v 0)
Dec 11 09:18:23 compute-0 ceph-mon[74426]: log_channel(audit) log [DBG] : from='mgr.14577 192.168.122.100:0/2102670525' entity='mgr.compute-0.wwpcae' cmd=[{"prefix": "mgr metadata", "who": "compute-1.unesvp", "id": "compute-1.unesvp"}]: dispatch
Dec 11 09:18:23 compute-0 ceph-mon[74426]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0)
Dec 11 09:18:23 compute-0 ceph-mon[74426]: log_channel(audit) log [DBG] : from='mgr.14577 192.168.122.100:0/2102670525' entity='mgr.compute-0.wwpcae' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Dec 11 09:18:23 compute-0 ceph-mon[74426]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0)
Dec 11 09:18:23 compute-0 ceph-mon[74426]: log_channel(audit) log [DBG] : from='mgr.14577 192.168.122.100:0/2102670525' entity='mgr.compute-0.wwpcae' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Dec 11 09:18:23 compute-0 ceph-mon[74426]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Dec 11 09:18:23 compute-0 ceph-mon[74426]: log_channel(audit) log [DBG] : from='mgr.14577 192.168.122.100:0/2102670525' entity='mgr.compute-0.wwpcae' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Dec 11 09:18:23 compute-0 ceph-mon[74426]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mds metadata"} v 0)
Dec 11 09:18:23 compute-0 ceph-mon[74426]: log_channel(audit) log [DBG] : from='mgr.14577 192.168.122.100:0/2102670525' entity='mgr.compute-0.wwpcae' cmd=[{"prefix": "mds metadata"}]: dispatch
Dec 11 09:18:23 compute-0 ceph-mon[74426]: mon.compute-0@0(leader).mds e9 all = 1
Dec 11 09:18:23 compute-0 ceph-mon[74426]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata"} v 0)
Dec 11 09:18:23 compute-0 ceph-mon[74426]: log_channel(audit) log [DBG] : from='mgr.14577 192.168.122.100:0/2102670525' entity='mgr.compute-0.wwpcae' cmd=[{"prefix": "osd metadata"}]: dispatch
Dec 11 09:18:23 compute-0 ceph-mon[74426]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon metadata"} v 0)
Dec 11 09:18:23 compute-0 ceph-mon[74426]: log_channel(audit) log [DBG] : from='mgr.14577 192.168.122.100:0/2102670525' entity='mgr.compute-0.wwpcae' cmd=[{"prefix": "mon metadata"}]: dispatch
Dec 11 09:18:23 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-2-0-compute-0-iryjby[94782]: 11/12/2025 09:18:23 : epoch 693a8bfe : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fef4c002690 fd 14 proxy header rest len failed header rlen = % (will set dead)
Dec 11 09:18:23 compute-0 ceph-mgr[74715]: [balancer DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec 11 09:18:23 compute-0 ceph-mgr[74715]: mgr load Constructed class from module: balancer
Dec 11 09:18:23 compute-0 ceph-mon[74426]: log_channel(cluster) log [INF] : Manager daemon compute-0.wwpcae is now available
Dec 11 09:18:23 compute-0 ceph-mgr[74715]: [balancer INFO root] Starting
Dec 11 09:18:23 compute-0 ceph-mgr[74715]: [cephadm DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec 11 09:18:23 compute-0 ceph-mgr[74715]: [balancer INFO root] Optimize plan auto_2025-12-11_09:18:23
Dec 11 09:18:23 compute-0 ceph-mgr[74715]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec 11 09:18:23 compute-0 ceph-mgr[74715]: [balancer INFO root] Some PGs (1.000000) are unknown; try again later
Dec 11 09:18:23 compute-0 ceph-mgr[74715]: mgr load Constructed class from module: cephadm
Dec 11 09:18:23 compute-0 ceph-mgr[74715]: [crash DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec 11 09:18:23 compute-0 ceph-mgr[74715]: mgr load Constructed class from module: crash
Dec 11 09:18:23 compute-0 ceph-mgr[74715]: [dashboard DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec 11 09:18:23 compute-0 ceph-mgr[74715]: mgr load Constructed class from module: dashboard
Dec 11 09:18:23 compute-0 ceph-mgr[74715]: [devicehealth DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec 11 09:18:23 compute-0 ceph-mgr[74715]: mgr load Constructed class from module: devicehealth
Dec 11 09:18:23 compute-0 ceph-mgr[74715]: [dashboard INFO access_control] Loading user roles DB version=2
Dec 11 09:18:23 compute-0 ceph-mgr[74715]: [dashboard INFO sso] Loading SSO DB version=1
Dec 11 09:18:23 compute-0 ceph-mgr[74715]: [dashboard INFO root] server: ssl=no host=192.168.122.100 port=8443
Dec 11 09:18:23 compute-0 ceph-mgr[74715]: [dashboard INFO root] Configured CherryPy, starting engine...
Dec 11 09:18:23 compute-0 ceph-mgr[74715]: [devicehealth INFO root] Starting
Dec 11 09:18:23 compute-0 ceph-mgr[74715]: [iostat DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec 11 09:18:23 compute-0 ceph-mgr[74715]: mgr load Constructed class from module: iostat
Dec 11 09:18:23 compute-0 ceph-mgr[74715]: [nfs DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec 11 09:18:23 compute-0 ceph-mgr[74715]: mgr load Constructed class from module: nfs
Dec 11 09:18:23 compute-0 ceph-mgr[74715]: [orchestrator DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec 11 09:18:23 compute-0 ceph-mgr[74715]: mgr load Constructed class from module: orchestrator
Dec 11 09:18:23 compute-0 ceph-mgr[74715]: [pg_autoscaler DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec 11 09:18:23 compute-0 ceph-mgr[74715]: mgr load Constructed class from module: pg_autoscaler
Dec 11 09:18:23 compute-0 ceph-mgr[74715]: [pg_autoscaler INFO root] _maybe_adjust
Dec 11 09:18:23 compute-0 ceph-mgr[74715]: [progress DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec 11 09:18:23 compute-0 ceph-mgr[74715]: mgr load Constructed class from module: progress
Dec 11 09:18:23 compute-0 ceph-mgr[74715]: [prometheus DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec 11 09:18:23 compute-0 ceph-mon[74426]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/prometheus/health_history}] v 0)
Dec 11 09:18:23 compute-0 ceph-mon[74426]: log_channel(audit) log [INF] : from='mgr.14577 192.168.122.100:0/2102670525' entity='mgr.compute-0.wwpcae' 
Dec 11 09:18:23 compute-0 ceph-mgr[74715]: mgr load Constructed class from module: prometheus
Dec 11 09:18:23 compute-0 ceph-mgr[74715]: [prometheus INFO root] server_addr: :: server_port: 9283
Dec 11 09:18:23 compute-0 ceph-mgr[74715]: [prometheus INFO root] Cache enabled
Dec 11 09:18:23 compute-0 ceph-mgr[74715]: [prometheus INFO root] starting metric collection thread
Dec 11 09:18:23 compute-0 ceph-mgr[74715]: [prometheus INFO root] Starting engine...
Dec 11 09:18:23 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-mgr-compute-0-wwpcae[74711]: [11/Dec/2025:09:18:23] ENGINE Bus STARTING
Dec 11 09:18:23 compute-0 ceph-mgr[74715]: [rbd_support DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec 11 09:18:23 compute-0 ceph-mgr[74715]: [prometheus INFO cherrypy.error] [11/Dec/2025:09:18:23] ENGINE Bus STARTING
Dec 11 09:18:23 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-mgr-compute-0-wwpcae[74711]: CherryPy Checker:
Dec 11 09:18:23 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-mgr-compute-0-wwpcae[74711]: The Application mounted at '' has an empty config.
Dec 11 09:18:23 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-mgr-compute-0-wwpcae[74711]: 
Dec 11 09:18:23 compute-0 ceph-mon[74426]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Dec 11 09:18:23 compute-0 ceph-mon[74426]: log_channel(audit) log [DBG] : from='mgr.14577 192.168.122.100:0/2102670525' entity='mgr.compute-0.wwpcae' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec 11 09:18:23 compute-0 ceph-mgr[74715]: [progress INFO root] Loading...
Dec 11 09:18:23 compute-0 ceph-mgr[74715]: [progress INFO root] Loaded [<progress.module.GhostEvent object at 0x7f09caff3430>, <progress.module.GhostEvent object at 0x7f09caff33d0>, <progress.module.GhostEvent object at 0x7f09caff33a0>, <progress.module.GhostEvent object at 0x7f09caffbdf0>, <progress.module.GhostEvent object at 0x7f09caffbe80>, <progress.module.GhostEvent object at 0x7f09caffe460>, <progress.module.GhostEvent object at 0x7f09caffe430>, <progress.module.GhostEvent object at 0x7f09caffe4c0>, <progress.module.GhostEvent object at 0x7f09caffe490>, <progress.module.GhostEvent object at 0x7f09caffe850>, <progress.module.GhostEvent object at 0x7f09caffe820>, <progress.module.GhostEvent object at 0x7f09caffe7f0>, <progress.module.GhostEvent object at 0x7f09caffe7c0>, <progress.module.GhostEvent object at 0x7f09caffe790>, <progress.module.GhostEvent object at 0x7f09caffe760>, <progress.module.GhostEvent object at 0x7f09caffe730>, <progress.module.GhostEvent object at 0x7f09caffe700>, <progress.module.GhostEvent object at 0x7f09caffe6d0>, <progress.module.GhostEvent object at 0x7f09caffe6a0>, <progress.module.GhostEvent object at 0x7f09caffe610>, <progress.module.GhostEvent object at 0x7f09caffe5e0>, <progress.module.GhostEvent object at 0x7f09caffe5b0>, <progress.module.GhostEvent object at 0x7f09caffe580>, <progress.module.GhostEvent object at 0x7f09caffe550>, <progress.module.GhostEvent object at 0x7f09caffe520>, <progress.module.GhostEvent object at 0x7f09caffe3d0>, <progress.module.GhostEvent object at 0x7f09caffe3a0>, <progress.module.GhostEvent object at 0x7f09caffe370>, <progress.module.GhostEvent object at 0x7f09caffe340>, <progress.module.GhostEvent object at 0x7f09caffe310>, <progress.module.GhostEvent object at 0x7f09caffe2e0>] historic events
Dec 11 09:18:23 compute-0 ceph-mgr[74715]: [progress INFO root] Loaded OSDMap, ready.
Dec 11 09:18:23 compute-0 ceph-mgr[74715]: [rbd_support INFO root] recovery thread starting
Dec 11 09:18:23 compute-0 ceph-mgr[74715]: [rbd_support INFO root] starting setup
Dec 11 09:18:23 compute-0 ceph-mgr[74715]: mgr load Constructed class from module: rbd_support
Dec 11 09:18:23 compute-0 ceph-mon[74426]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.wwpcae/mirror_snapshot_schedule"} v 0)
Dec 11 09:18:23 compute-0 ceph-mon[74426]: log_channel(audit) log [INF] : from='mgr.14577 192.168.122.100:0/2102670525' entity='mgr.compute-0.wwpcae' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.wwpcae/mirror_snapshot_schedule"}]: dispatch
Dec 11 09:18:23 compute-0 ceph-mgr[74715]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec 11 09:18:23 compute-0 ceph-mgr[74715]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 11 09:18:23 compute-0 ceph-mgr[74715]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 11 09:18:23 compute-0 ceph-mgr[74715]: [restful DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec 11 09:18:23 compute-0 ceph-mgr[74715]: mgr load Constructed class from module: restful
Dec 11 09:18:23 compute-0 ceph-mgr[74715]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 11 09:18:23 compute-0 ceph-mgr[74715]: [status DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec 11 09:18:23 compute-0 ceph-mgr[74715]: mgr load Constructed class from module: status
Dec 11 09:18:23 compute-0 ceph-mgr[74715]: [telemetry DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec 11 09:18:23 compute-0 ceph-mgr[74715]: mgr load Constructed class from module: telemetry
Dec 11 09:18:23 compute-0 ceph-mgr[74715]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 11 09:18:23 compute-0 ceph-mgr[74715]: [restful INFO root] server_addr: :: server_port: 8003
Dec 11 09:18:23 compute-0 ceph-mgr[74715]: [restful WARNING root] server not running: no certificate configured
Dec 11 09:18:23 compute-0 ceph-mgr[74715]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: starting
Dec 11 09:18:23 compute-0 ceph-mgr[74715]: [volumes DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec 11 09:18:23 compute-0 ceph-mgr[74715]: [rbd_support INFO root] PerfHandler: starting
Dec 11 09:18:23 compute-0 ceph-mgr[74715]: [rbd_support INFO root] load_task_task: vms, start_after=
Dec 11 09:18:23 compute-0 ceph-mgr[74715]: [rbd_support INFO root] load_task_task: volumes, start_after=
Dec 11 09:18:23 compute-0 ceph-mgr[74715]: [rbd_support INFO root] load_task_task: backups, start_after=
Dec 11 09:18:23 compute-0 ceph-mgr[74715]: [rbd_support INFO root] load_task_task: images, start_after=
Dec 11 09:18:23 compute-0 ceph-mgr[74715]: [rbd_support INFO root] TaskHandler: starting
Dec 11 09:18:23 compute-0 ceph-mon[74426]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.wwpcae/trash_purge_schedule"} v 0)
Dec 11 09:18:23 compute-0 ceph-mon[74426]: log_channel(audit) log [INF] : from='mgr.14577 192.168.122.100:0/2102670525' entity='mgr.compute-0.wwpcae' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.wwpcae/trash_purge_schedule"}]: dispatch
Dec 11 09:18:23 compute-0 ceph-mgr[74715]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec 11 09:18:23 compute-0 ceph-mgr[74715]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 11 09:18:23 compute-0 ceph-mgr[74715]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 11 09:18:23 compute-0 ceph-mgr[74715]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 11 09:18:23 compute-0 ceph-mgr[74715]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 11 09:18:23 compute-0 ceph-mgr[74715]: [rbd_support INFO root] TrashPurgeScheduleHandler: starting
Dec 11 09:18:23 compute-0 ceph-mgr[74715]: [rbd_support INFO root] setup complete
Dec 11 09:18:23 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-2-0-compute-0-iryjby[94782]: 11/12/2025 09:18:23 : epoch 693a8bfe : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fef70004440 fd 14 proxy header rest len failed header rlen = % (will set dead)
Dec 11 09:18:23 compute-0 ceph-mgr[74715]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Dec 11 09:18:23 compute-0 ceph-mgr[74715]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Dec 11 09:18:23 compute-0 ceph-mgr[74715]: mgr load Constructed class from module: volumes
Dec 11 09:18:23 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-mgr-compute-0-wwpcae[74711]: [11/Dec/2025:09:18:23] ENGINE Serving on http://:::9283
Dec 11 09:18:23 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-mgr-compute-0-wwpcae[74711]: [11/Dec/2025:09:18:23] ENGINE Bus STARTED
Dec 11 09:18:23 compute-0 ceph-mgr[74715]: [prometheus INFO cherrypy.error] [11/Dec/2025:09:18:23] ENGINE Serving on http://:::9283
Dec 11 09:18:23 compute-0 ceph-mgr[74715]: [prometheus INFO cherrypy.error] [11/Dec/2025:09:18:23] ENGINE Bus STARTED
Dec 11 09:18:23 compute-0 ceph-mgr[74715]: [prometheus INFO root] Engine started.
Dec 11 09:18:23 compute-0 ceph-mgr[74715]: client.0 error registering admin socket command: (17) File exists
Dec 11 09:18:23 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-mgr-compute-0-wwpcae[74711]: 2025-12-11T09:18:23.415+0000 7f09b5c3a640 -1 client.0 error registering admin socket command: (17) File exists
Dec 11 09:18:23 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-mgr-compute-0-wwpcae[74711]: 2025-12-11T09:18:23.422+0000 7f09b2c34640 -1 client.0 error registering admin socket command: (17) File exists
Dec 11 09:18:23 compute-0 ceph-mgr[74715]: client.0 error registering admin socket command: (17) File exists
Dec 11 09:18:23 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-mgr-compute-0-wwpcae[74711]: 2025-12-11T09:18:23.422+0000 7f09b2c34640 -1 client.0 error registering admin socket command: (17) File exists
Dec 11 09:18:23 compute-0 ceph-mgr[74715]: client.0 error registering admin socket command: (17) File exists
Dec 11 09:18:23 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-mgr-compute-0-wwpcae[74711]: 2025-12-11T09:18:23.422+0000 7f09b2c34640 -1 client.0 error registering admin socket command: (17) File exists
Dec 11 09:18:23 compute-0 ceph-mgr[74715]: client.0 error registering admin socket command: (17) File exists
Dec 11 09:18:23 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-mgr-compute-0-wwpcae[74711]: 2025-12-11T09:18:23.422+0000 7f09b2c34640 -1 client.0 error registering admin socket command: (17) File exists
Dec 11 09:18:23 compute-0 ceph-mgr[74715]: client.0 error registering admin socket command: (17) File exists
Dec 11 09:18:23 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-mgr-compute-0-wwpcae[74711]: 2025-12-11T09:18:23.422+0000 7f09b2c34640 -1 client.0 error registering admin socket command: (17) File exists
Dec 11 09:18:23 compute-0 ceph-mgr[74715]: client.0 error registering admin socket command: (17) File exists
Dec 11 09:18:23 compute-0 ceph-mgr[74715]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: CephFS -> /api/cephfs
Dec 11 09:18:23 compute-0 ceph-mgr[74715]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: CephFsUi -> /ui-api/cephfs
Dec 11 09:18:23 compute-0 ceph-mgr[74715]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: CephFSSubvolume -> /api/cephfs/subvolume
Dec 11 09:18:23 compute-0 ceph-mgr[74715]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: CephFSSubvolumeGroups -> /api/cephfs/subvolume/group
Dec 11 09:18:23 compute-0 ceph-mgr[74715]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: CephFSSubvolumeSnapshots -> /api/cephfs/subvolume/snapshot
Dec 11 09:18:23 compute-0 ceph-mgr[74715]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: CephFsSnapshotClone -> /api/cephfs/subvolume/snapshot/clone
Dec 11 09:18:23 compute-0 ceph-mgr[74715]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: CephFSSnapshotSchedule -> /api/cephfs/snapshot/schedule
Dec 11 09:18:23 compute-0 ceph-mgr[74715]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: IscsiUi -> /ui-api/iscsi
Dec 11 09:18:23 compute-0 ceph-mgr[74715]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Iscsi -> /api/iscsi
Dec 11 09:18:23 compute-0 ceph-mgr[74715]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: IscsiTarget -> /api/iscsi/target
Dec 11 09:18:23 compute-0 ceph-mgr[74715]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: NFSGaneshaCluster -> /api/nfs-ganesha/cluster
Dec 11 09:18:23 compute-0 ceph-mgr[74715]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: NFSGaneshaExports -> /api/nfs-ganesha/export
Dec 11 09:18:23 compute-0 ceph-mgr[74715]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: NFSGaneshaUi -> /ui-api/nfs-ganesha
Dec 11 09:18:23 compute-0 ceph-mgr[74715]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Orchestrator -> /ui-api/orchestrator
Dec 11 09:18:23 compute-0 ceph-mgr[74715]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Service -> /api/service
Dec 11 09:18:23 compute-0 ceph-mgr[74715]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RbdMirroring -> /api/block/mirroring
Dec 11 09:18:23 compute-0 ceph-mgr[74715]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RbdMirroringSummary -> /api/block/mirroring/summary
Dec 11 09:18:23 compute-0 ceph-mgr[74715]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RbdMirroringPoolMode -> /api/block/mirroring/pool
Dec 11 09:18:23 compute-0 ceph-mgr[74715]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RbdMirroringPoolBootstrap -> /api/block/mirroring/pool/{pool_name}/bootstrap
Dec 11 09:18:23 compute-0 ceph-mgr[74715]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RbdMirroringPoolPeer -> /api/block/mirroring/pool/{pool_name}/peer
Dec 11 09:18:23 compute-0 ceph-mgr[74715]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RbdMirroringStatus -> /ui-api/block/mirroring
Dec 11 09:18:23 compute-0 ceph-mgr[74715]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Pool -> /api/pool
Dec 11 09:18:23 compute-0 ceph-mgr[74715]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: PoolUi -> /ui-api/pool
Dec 11 09:18:23 compute-0 ceph-mgr[74715]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RBDPool -> /api/pool
Dec 11 09:18:23 compute-0 ceph-mgr[74715]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Rbd -> /api/block/image
Dec 11 09:18:23 compute-0 ceph-mgr[74715]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RbdStatus -> /ui-api/block/rbd
Dec 11 09:18:23 compute-0 ceph-mgr[74715]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RbdSnapshot -> /api/block/image/{image_spec}/snap
Dec 11 09:18:23 compute-0 ceph-mgr[74715]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RbdTrash -> /api/block/image/trash
Dec 11 09:18:23 compute-0 ceph-mgr[74715]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RbdNamespace -> /api/block/pool/{pool_name}/namespace
Dec 11 09:18:23 compute-0 ceph-mgr[74715]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Rgw -> /ui-api/rgw
Dec 11 09:18:23 compute-0 ceph-mgr[74715]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RgwMultisiteStatus -> /ui-api/rgw/multisite
Dec 11 09:18:23 compute-0 ceph-mgr[74715]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RgwMultisiteController -> /api/rgw/multisite
Dec 11 09:18:23 compute-0 ceph-mgr[74715]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RgwDaemon -> /api/rgw/daemon
Dec 11 09:18:23 compute-0 ceph-mgr[74715]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RgwSite -> /api/rgw/site
Dec 11 09:18:23 compute-0 ceph-mgr[74715]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RgwBucket -> /api/rgw/bucket
Dec 11 09:18:23 compute-0 ceph-mgr[74715]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RgwBucketUi -> /ui-api/rgw/bucket
Dec 11 09:18:23 compute-0 ceph-mgr[74715]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RgwUser -> /api/rgw/user
Dec 11 09:18:23 compute-0 ceph-mgr[74715]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: rgwroles_CRUDClass -> /api/rgw/roles
Dec 11 09:18:23 compute-0 ceph-mgr[74715]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: rgwroles_CRUDClassMetadata -> /ui-api/rgw/roles
Dec 11 09:18:23 compute-0 ceph-mgr[74715]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RgwRealm -> /api/rgw/realm
Dec 11 09:18:23 compute-0 ceph-mgr[74715]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RgwZonegroup -> /api/rgw/zonegroup
Dec 11 09:18:23 compute-0 ceph-mgr[74715]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RgwZone -> /api/rgw/zone
Dec 11 09:18:23 compute-0 ceph-mgr[74715]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Auth -> /api/auth
Dec 11 09:18:23 compute-0 ceph-mgr[74715]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: clusteruser_CRUDClass -> /api/cluster/user
Dec 11 09:18:23 compute-0 ceph-mgr[74715]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: clusteruser_CRUDClassMetadata -> /ui-api/cluster/user
Dec 11 09:18:23 compute-0 ceph-mgr[74715]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Cluster -> /api/cluster
Dec 11 09:18:23 compute-0 ceph-mgr[74715]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: ClusterUpgrade -> /api/cluster/upgrade
Dec 11 09:18:23 compute-0 ceph-mgr[74715]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: ClusterConfiguration -> /api/cluster_conf
Dec 11 09:18:23 compute-0 ceph-mgr[74715]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: CrushRule -> /api/crush_rule
Dec 11 09:18:23 compute-0 ceph-mgr[74715]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: CrushRuleUi -> /ui-api/crush_rule
Dec 11 09:18:23 compute-0 ceph-mgr[74715]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Daemon -> /api/daemon
Dec 11 09:18:23 compute-0 ceph-mgr[74715]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Docs -> /docs
Dec 11 09:18:23 compute-0 ceph-mgr[74715]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: ErasureCodeProfile -> /api/erasure_code_profile
Dec 11 09:18:23 compute-0 ceph-mgr[74715]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: ErasureCodeProfileUi -> /ui-api/erasure_code_profile
Dec 11 09:18:23 compute-0 ceph-mgr[74715]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: FeedbackController -> /api/feedback
Dec 11 09:18:23 compute-0 ceph-mgr[74715]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: FeedbackApiController -> /api/feedback/api_key
Dec 11 09:18:23 compute-0 ceph-mgr[74715]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: FeedbackUiController -> /ui-api/feedback/api_key
Dec 11 09:18:23 compute-0 ceph-mgr[74715]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: FrontendLogging -> /ui-api/logging
Dec 11 09:18:23 compute-0 ceph-mgr[74715]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Grafana -> /api/grafana
Dec 11 09:18:23 compute-0 ceph-mgr[74715]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Host -> /api/host
Dec 11 09:18:23 compute-0 ceph-mgr[74715]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: HostUi -> /ui-api/host
Dec 11 09:18:23 compute-0 ceph-mgr[74715]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Health -> /api/health
Dec 11 09:18:23 compute-0 ceph-mgr[74715]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: HomeController -> /
Dec 11 09:18:23 compute-0 ceph-mgr[74715]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: LangsController -> /ui-api/langs
Dec 11 09:18:23 compute-0 ceph-mgr[74715]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: LoginController -> /ui-api/login
Dec 11 09:18:23 compute-0 ceph-mgr[74715]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Logs -> /api/logs
Dec 11 09:18:23 compute-0 ceph-mgr[74715]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: MgrModules -> /api/mgr/module
Dec 11 09:18:23 compute-0 ceph-mgr[74715]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Monitor -> /api/monitor
Dec 11 09:18:23 compute-0 ceph-mgr[74715]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: NVMeoFGateway -> /api/nvmeof/gateway
Dec 11 09:18:23 compute-0 ceph-mgr[74715]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: NVMeoFSpdk -> /api/nvmeof/spdk
Dec 11 09:18:23 compute-0 ceph-mgr[74715]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: NVMeoFSubsystem -> /api/nvmeof/subsystem
Dec 11 09:18:23 compute-0 ceph-mgr[74715]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: NVMeoFListener -> /api/nvmeof/subsystem/{nqn}/listener
Dec 11 09:18:23 compute-0 ceph-mgr[74715]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: NVMeoFNamespace -> /api/nvmeof/subsystem/{nqn}/namespace
Dec 11 09:18:23 compute-0 ceph-mgr[74715]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: NVMeoFHost -> /api/nvmeof/subsystem/{nqn}/host
Dec 11 09:18:23 compute-0 ceph-mgr[74715]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: NVMeoFConnection -> /api/nvmeof/subsystem/{nqn}/connection
Dec 11 09:18:23 compute-0 ceph-mgr[74715]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: NVMeoFTcpUI -> /ui-api/nvmeof
Dec 11 09:18:23 compute-0 ceph-mgr[74715]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Osd -> /api/osd
Dec 11 09:18:23 compute-0 ceph-mgr[74715]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: OsdUi -> /ui-api/osd
Dec 11 09:18:23 compute-0 sshd-session[98453]: Accepted publickey for ceph-admin from 192.168.122.100 port 45174 ssh2: RSA SHA256:VeIx2NZka5hi0niQjHCLie+FE2InrWghbFhBMbpMPGo
Dec 11 09:18:23 compute-0 ceph-mgr[74715]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: OsdFlagsController -> /api/osd/flags
Dec 11 09:18:23 compute-0 ceph-mgr[74715]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: MdsPerfCounter -> /api/perf_counters/mds
Dec 11 09:18:23 compute-0 ceph-mgr[74715]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: MonPerfCounter -> /api/perf_counters/mon
Dec 11 09:18:23 compute-0 ceph-mgr[74715]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: OsdPerfCounter -> /api/perf_counters/osd
Dec 11 09:18:23 compute-0 ceph-mgr[74715]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RgwPerfCounter -> /api/perf_counters/rgw
Dec 11 09:18:23 compute-0 ceph-mgr[74715]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RbdMirrorPerfCounter -> /api/perf_counters/rbd-mirror
Dec 11 09:18:23 compute-0 ceph-mgr[74715]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: MgrPerfCounter -> /api/perf_counters/mgr
Dec 11 09:18:23 compute-0 ceph-mgr[74715]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: TcmuRunnerPerfCounter -> /api/perf_counters/tcmu-runner
Dec 11 09:18:23 compute-0 ceph-mgr[74715]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: PerfCounters -> /api/perf_counters
Dec 11 09:18:23 compute-0 ceph-mgr[74715]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: PrometheusReceiver -> /api/prometheus_receiver
Dec 11 09:18:23 compute-0 ceph-mgr[74715]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Prometheus -> /api/prometheus
Dec 11 09:18:23 compute-0 ceph-mgr[74715]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: PrometheusNotifications -> /api/prometheus/notifications
Dec 11 09:18:23 compute-0 ceph-mgr[74715]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: PrometheusSettings -> /ui-api/prometheus
Dec 11 09:18:23 compute-0 ceph-mgr[74715]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Role -> /api/role
Dec 11 09:18:23 compute-0 ceph-mgr[74715]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Scope -> /ui-api/scope
Dec 11 09:18:23 compute-0 ceph-mgr[74715]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Saml2 -> /auth/saml2
Dec 11 09:18:23 compute-0 ceph-mgr[74715]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Settings -> /api/settings
Dec 11 09:18:23 compute-0 ceph-mgr[74715]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: StandardSettings -> /ui-api/standard_settings
Dec 11 09:18:23 compute-0 ceph-mgr[74715]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Summary -> /api/summary
Dec 11 09:18:23 compute-0 ceph-mgr[74715]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Task -> /api/task
Dec 11 09:18:23 compute-0 ceph-mgr[74715]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Telemetry -> /api/telemetry
Dec 11 09:18:23 compute-0 ceph-mgr[74715]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: User -> /api/user
Dec 11 09:18:23 compute-0 ceph-mgr[74715]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: UserPasswordPolicy -> /api/user
Dec 11 09:18:23 compute-0 ceph-mgr[74715]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: UserChangePassword -> /api/user/{username}
Dec 11 09:18:23 compute-0 ceph-mgr[74715]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: FeatureTogglesEndpoint -> /api/feature_toggles
Dec 11 09:18:23 compute-0 ceph-mgr[74715]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: MessageOfTheDay -> /ui-api/motd
Dec 11 09:18:23 compute-0 systemd-logind[792]: New session 37 of user ceph-admin.
Dec 11 09:18:23 compute-0 systemd[1]: Started Session 37 of User ceph-admin.
Dec 11 09:18:23 compute-0 sshd-session[98453]: pam_unix(sshd:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Dec 11 09:18:23 compute-0 ceph-mon[74426]: mgrmap e31: compute-0.wwpcae(active, since 2m), standbys: compute-2.uiimcn, compute-1.unesvp
Dec 11 09:18:23 compute-0 ceph-mon[74426]: Standby manager daemon compute-2.uiimcn restarted
Dec 11 09:18:23 compute-0 ceph-mon[74426]: Standby manager daemon compute-2.uiimcn started
Dec 11 09:18:23 compute-0 ceph-mon[74426]: Active manager daemon compute-0.wwpcae restarted
Dec 11 09:18:23 compute-0 ceph-mon[74426]: Activating manager daemon compute-0.wwpcae
Dec 11 09:18:23 compute-0 ceph-mon[74426]: osdmap e113: 3 total, 3 up, 3 in
Dec 11 09:18:23 compute-0 ceph-mon[74426]: mgrmap e32: compute-0.wwpcae(active, starting, since 0.027586s), standbys: compute-2.uiimcn, compute-1.unesvp
Dec 11 09:18:23 compute-0 ceph-mon[74426]: from='mgr.14577 192.168.122.100:0/2102670525' entity='mgr.compute-0.wwpcae' cmd=[{"prefix": "mon metadata", "id": "compute-0"}]: dispatch
Dec 11 09:18:23 compute-0 ceph-mon[74426]: from='mgr.14577 192.168.122.100:0/2102670525' entity='mgr.compute-0.wwpcae' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Dec 11 09:18:23 compute-0 ceph-mon[74426]: from='mgr.14577 192.168.122.100:0/2102670525' entity='mgr.compute-0.wwpcae' cmd=[{"prefix": "mon metadata", "id": "compute-2"}]: dispatch
Dec 11 09:18:23 compute-0 ceph-mon[74426]: from='mgr.14577 192.168.122.100:0/2102670525' entity='mgr.compute-0.wwpcae' cmd=[{"prefix": "mds metadata", "who": "cephfs.compute-0.ejykhm"}]: dispatch
Dec 11 09:18:23 compute-0 ceph-mon[74426]: from='mgr.14577 192.168.122.100:0/2102670525' entity='mgr.compute-0.wwpcae' cmd=[{"prefix": "mds metadata", "who": "cephfs.compute-2.abebdg"}]: dispatch
Dec 11 09:18:23 compute-0 ceph-mon[74426]: from='mgr.14577 192.168.122.100:0/2102670525' entity='mgr.compute-0.wwpcae' cmd=[{"prefix": "mds metadata", "who": "cephfs.compute-1.hifxsh"}]: dispatch
Dec 11 09:18:23 compute-0 ceph-mon[74426]: from='mgr.14577 192.168.122.100:0/2102670525' entity='mgr.compute-0.wwpcae' cmd=[{"prefix": "mgr metadata", "who": "compute-0.wwpcae", "id": "compute-0.wwpcae"}]: dispatch
Dec 11 09:18:23 compute-0 ceph-mon[74426]: from='mgr.14577 192.168.122.100:0/2102670525' entity='mgr.compute-0.wwpcae' cmd=[{"prefix": "mgr metadata", "who": "compute-2.uiimcn", "id": "compute-2.uiimcn"}]: dispatch
Dec 11 09:18:23 compute-0 ceph-mon[74426]: from='mgr.14577 192.168.122.100:0/2102670525' entity='mgr.compute-0.wwpcae' cmd=[{"prefix": "mgr metadata", "who": "compute-1.unesvp", "id": "compute-1.unesvp"}]: dispatch
Dec 11 09:18:23 compute-0 ceph-mon[74426]: from='mgr.14577 192.168.122.100:0/2102670525' entity='mgr.compute-0.wwpcae' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Dec 11 09:18:23 compute-0 ceph-mon[74426]: from='mgr.14577 192.168.122.100:0/2102670525' entity='mgr.compute-0.wwpcae' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Dec 11 09:18:23 compute-0 ceph-mon[74426]: from='mgr.14577 192.168.122.100:0/2102670525' entity='mgr.compute-0.wwpcae' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Dec 11 09:18:23 compute-0 ceph-mon[74426]: from='mgr.14577 192.168.122.100:0/2102670525' entity='mgr.compute-0.wwpcae' cmd=[{"prefix": "mds metadata"}]: dispatch
Dec 11 09:18:23 compute-0 ceph-mon[74426]: from='mgr.14577 192.168.122.100:0/2102670525' entity='mgr.compute-0.wwpcae' cmd=[{"prefix": "osd metadata"}]: dispatch
Dec 11 09:18:23 compute-0 ceph-mon[74426]: from='mgr.14577 192.168.122.100:0/2102670525' entity='mgr.compute-0.wwpcae' cmd=[{"prefix": "mon metadata"}]: dispatch
Dec 11 09:18:23 compute-0 ceph-mon[74426]: Manager daemon compute-0.wwpcae is now available
Dec 11 09:18:23 compute-0 ceph-mon[74426]: from='mgr.14577 192.168.122.100:0/2102670525' entity='mgr.compute-0.wwpcae' 
Dec 11 09:18:23 compute-0 ceph-mon[74426]: from='mgr.14577 192.168.122.100:0/2102670525' entity='mgr.compute-0.wwpcae' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec 11 09:18:23 compute-0 ceph-mon[74426]: from='mgr.14577 192.168.122.100:0/2102670525' entity='mgr.compute-0.wwpcae' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.wwpcae/mirror_snapshot_schedule"}]: dispatch
Dec 11 09:18:23 compute-0 ceph-mon[74426]: from='mgr.14577 192.168.122.100:0/2102670525' entity='mgr.compute-0.wwpcae' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.wwpcae/trash_purge_schedule"}]: dispatch
Dec 11 09:18:24 compute-0 sudo[98463]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 11 09:18:24 compute-0 sudo[98463]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 11 09:18:24 compute-0 sudo[98463]: pam_unix(sudo:session): session closed for user root
Dec 11 09:18:24 compute-0 ceph-mon[74426]: log_channel(cluster) log [DBG] : mgrmap e33: compute-0.wwpcae(active, since 1.0456s), standbys: compute-2.uiimcn, compute-1.unesvp
Dec 11 09:18:24 compute-0 ceph-mgr[74715]: [dashboard INFO dashboard.module] Engine started.
Dec 11 09:18:24 compute-0 ceph-mgr[74715]: log_channel(cluster) log [DBG] : pgmap v3: 353 pgs: 353 active+clean; 457 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 11 09:18:24 compute-0 sudo[98493]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ls
Dec 11 09:18:24 compute-0 sudo[98493]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 11 09:18:24 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-mgr-compute-0-wwpcae[74711]: ::ffff:192.168.122.100 - - [11/Dec/2025:09:18:24] "GET /metrics HTTP/1.1" 200 46583 "" "Prometheus/2.51.0"
Dec 11 09:18:24 compute-0 ceph-mgr[74715]: [prometheus INFO cherrypy.access.139680037156848] ::ffff:192.168.122.100 - - [11/Dec/2025:09:18:24] "GET /metrics HTTP/1.1" 200 46583 "" "Prometheus/2.51.0"
Dec 11 09:18:24 compute-0 ceph-mgr[74715]: [cephadm INFO cherrypy.error] [11/Dec/2025:09:18:24] ENGINE Bus STARTING
Dec 11 09:18:24 compute-0 ceph-mgr[74715]: log_channel(cephadm) log [INF] : [11/Dec/2025:09:18:24] ENGINE Bus STARTING
Dec 11 09:18:24 compute-0 podman[98595]: 2025-12-11 09:18:24.741825119 +0000 UTC m=+0.066571244 container exec 9c08b5b1828392d2cc014af7ec9c415c9c915d4b6dd9798da9403e6929851bd4 (image=quay.io/ceph/ceph:v19, name=ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-mon-compute-0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Dec 11 09:18:24 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-2-0-compute-0-iryjby[94782]: 11/12/2025 09:18:24 : epoch 693a8bfe : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fef60003490 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 11 09:18:24 compute-0 ceph-mgr[74715]: [cephadm INFO cherrypy.error] [11/Dec/2025:09:18:24] ENGINE Serving on https://192.168.122.100:7150
Dec 11 09:18:24 compute-0 ceph-mgr[74715]: log_channel(cephadm) log [INF] : [11/Dec/2025:09:18:24] ENGINE Serving on https://192.168.122.100:7150
Dec 11 09:18:24 compute-0 ceph-mgr[74715]: [cephadm INFO cherrypy.error] [11/Dec/2025:09:18:24] ENGINE Client ('192.168.122.100', 33772) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)')
Dec 11 09:18:24 compute-0 ceph-mgr[74715]: log_channel(cephadm) log [INF] : [11/Dec/2025:09:18:24] ENGINE Client ('192.168.122.100', 33772) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)')
Dec 11 09:18:24 compute-0 podman[98595]: 2025-12-11 09:18:24.843239536 +0000 UTC m=+0.167985651 container exec_died 9c08b5b1828392d2cc014af7ec9c415c9c915d4b6dd9798da9403e6929851bd4 (image=quay.io/ceph/ceph:v19, name=ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-mon-compute-0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 11 09:18:24 compute-0 ceph-mgr[74715]: [cephadm INFO cherrypy.error] [11/Dec/2025:09:18:24] ENGINE Serving on http://192.168.122.100:8765
Dec 11 09:18:24 compute-0 ceph-mgr[74715]: log_channel(cephadm) log [INF] : [11/Dec/2025:09:18:24] ENGINE Serving on http://192.168.122.100:8765
Dec 11 09:18:24 compute-0 ceph-mgr[74715]: [cephadm INFO cherrypy.error] [11/Dec/2025:09:18:24] ENGINE Bus STARTED
Dec 11 09:18:24 compute-0 ceph-mgr[74715]: log_channel(cephadm) log [INF] : [11/Dec/2025:09:18:24] ENGINE Bus STARTED
Dec 11 09:18:24 compute-0 radosgw[93354]: ====== starting new request req=0x7f36cd2845d0 =====
Dec 11 09:18:24 compute-0 radosgw[93354]: ====== req done req=0x7f36cd2845d0 op status=0 http_status=200 latency=0.001000031s ======
Dec 11 09:18:24 compute-0 radosgw[93354]: beast: 0x7f36cd2845d0: 192.168.122.102 - anonymous [11/Dec/2025:09:18:24.947 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Dec 11 09:18:24 compute-0 radosgw[93354]: ====== starting new request req=0x7f36cd2845d0 =====
Dec 11 09:18:24 compute-0 radosgw[93354]: ====== req done req=0x7f36cd2845d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 11 09:18:24 compute-0 radosgw[93354]: beast: 0x7f36cd2845d0: 192.168.122.100 - anonymous [11/Dec/2025:09:18:24.977 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 11 09:18:25 compute-0 ceph-mon[74426]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "18"} v 0)
Dec 11 09:18:25 compute-0 ceph-mon[74426]: log_channel(audit) log [INF] : from='mgr.14577 192.168.122.100:0/2102670525' entity='mgr.compute-0.wwpcae' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "18"}]: dispatch
Dec 11 09:18:25 compute-0 ceph-mon[74426]: mgrmap e33: compute-0.wwpcae(active, since 1.0456s), standbys: compute-2.uiimcn, compute-1.unesvp
Dec 11 09:18:25 compute-0 ceph-mon[74426]: pgmap v3: 353 pgs: 353 active+clean; 457 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 11 09:18:25 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-2-0-compute-0-iryjby[94782]: 11/12/2025 09:18:25 : epoch 693a8bfe : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fef500032f0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 11 09:18:25 compute-0 ceph-mgr[74715]: log_channel(cluster) log [DBG] : pgmap v4: 353 pgs: 353 active+clean; 457 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 11 09:18:25 compute-0 ceph-mon[74426]: log_channel(cluster) log [DBG] : mgrmap e34: compute-0.wwpcae(active, since 2s), standbys: compute-2.uiimcn, compute-1.unesvp
Dec 11 09:18:25 compute-0 ceph-mon[74426]: mon.compute-0@0(leader).osd e113 do_prune osdmap full prune enabled
Dec 11 09:18:25 compute-0 ceph-mon[74426]: log_channel(audit) log [INF] : from='mgr.14577 192.168.122.100:0/2102670525' entity='mgr.compute-0.wwpcae' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "18"}]': finished
Dec 11 09:18:25 compute-0 ceph-mon[74426]: mon.compute-0@0(leader).osd e114 e114: 3 total, 3 up, 3 in
Dec 11 09:18:25 compute-0 ceph-mon[74426]: log_channel(cluster) log [DBG] : osdmap e114: 3 total, 3 up, 3 in
Dec 11 09:18:25 compute-0 ceph-mgr[74715]: [devicehealth INFO root] Check health
Dec 11 09:18:25 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-2-0-compute-0-iryjby[94782]: 11/12/2025 09:18:25 : epoch 693a8bfe : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fef4c002690 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 11 09:18:25 compute-0 podman[98746]: 2025-12-11 09:18:25.492038418 +0000 UTC m=+0.061482796 container exec 71c011c91ff6298c4d680d36893ef7d2b662adac0c087e7fdac0eacd23df3a9f (image=quay.io/prometheus/node-exporter:v1.7.0, name=ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-node-exporter-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec 11 09:18:25 compute-0 podman[98746]: 2025-12-11 09:18:25.502363217 +0000 UTC m=+0.071807595 container exec_died 71c011c91ff6298c4d680d36893ef7d2b662adac0c087e7fdac0eacd23df3a9f (image=quay.io/prometheus/node-exporter:v1.7.0, name=ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-node-exporter-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec 11 09:18:25 compute-0 podman[98837]: 2025-12-11 09:18:25.845577269 +0000 UTC m=+0.056801863 container exec b054262215ee01c3adca2deaedf76efed99f7a0a45370b1e29ea68ac174c96fc (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-2-0-compute-0-iryjby, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 11 09:18:25 compute-0 podman[98837]: 2025-12-11 09:18:25.860176949 +0000 UTC m=+0.071401543 container exec_died b054262215ee01c3adca2deaedf76efed99f7a0a45370b1e29ea68ac174c96fc (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-2-0-compute-0-iryjby, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec 11 09:18:26 compute-0 podman[98900]: 2025-12-11 09:18:26.067245083 +0000 UTC m=+0.053363456 container exec 67bfd387bd504b410155709fb8f34d841ddb874b3571550320b1a27fbc3dea08 (image=quay.io/ceph/haproxy:2.3, name=ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-haproxy-nfs-cephfs-compute-0-qtoxfz)
Dec 11 09:18:26 compute-0 ceph-mon[74426]: [11/Dec/2025:09:18:24] ENGINE Bus STARTING
Dec 11 09:18:26 compute-0 ceph-mon[74426]: [11/Dec/2025:09:18:24] ENGINE Serving on https://192.168.122.100:7150
Dec 11 09:18:26 compute-0 ceph-mon[74426]: [11/Dec/2025:09:18:24] ENGINE Client ('192.168.122.100', 33772) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)')
Dec 11 09:18:26 compute-0 ceph-mon[74426]: [11/Dec/2025:09:18:24] ENGINE Serving on http://192.168.122.100:8765
Dec 11 09:18:26 compute-0 ceph-mon[74426]: [11/Dec/2025:09:18:24] ENGINE Bus STARTED
Dec 11 09:18:26 compute-0 ceph-mon[74426]: pgmap v4: 353 pgs: 353 active+clean; 457 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 11 09:18:26 compute-0 ceph-mon[74426]: from='mgr.14577 192.168.122.100:0/2102670525' entity='mgr.compute-0.wwpcae' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "18"}]: dispatch
Dec 11 09:18:26 compute-0 ceph-mon[74426]: mgrmap e34: compute-0.wwpcae(active, since 2s), standbys: compute-2.uiimcn, compute-1.unesvp
Dec 11 09:18:26 compute-0 ceph-mon[74426]: from='mgr.14577 192.168.122.100:0/2102670525' entity='mgr.compute-0.wwpcae' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "18"}]': finished
Dec 11 09:18:26 compute-0 ceph-mon[74426]: osdmap e114: 3 total, 3 up, 3 in
Dec 11 09:18:26 compute-0 ceph-mon[74426]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Dec 11 09:18:26 compute-0 podman[98900]: 2025-12-11 09:18:26.108693391 +0000 UTC m=+0.094811744 container exec_died 67bfd387bd504b410155709fb8f34d841ddb874b3571550320b1a27fbc3dea08 (image=quay.io/ceph/haproxy:2.3, name=ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-haproxy-nfs-cephfs-compute-0-qtoxfz)
Dec 11 09:18:26 compute-0 ceph-mon[74426]: log_channel(audit) log [INF] : from='mgr.14577 192.168.122.100:0/2102670525' entity='mgr.compute-0.wwpcae' 
Dec 11 09:18:26 compute-0 ceph-mon[74426]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Dec 11 09:18:26 compute-0 ceph-mon[74426]: log_channel(audit) log [INF] : from='mgr.14577 192.168.122.100:0/2102670525' entity='mgr.compute-0.wwpcae' 
Dec 11 09:18:26 compute-0 sudo[98969]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uiazikmnhqotmcgjjvgmqhhimhigthqa ; /usr/bin/python3'
Dec 11 09:18:26 compute-0 sudo[98969]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 11 09:18:26 compute-0 ceph-mon[74426]: mon.compute-0@0(leader).osd e114 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 11 09:18:26 compute-0 podman[98989]: 2025-12-11 09:18:26.367664625 +0000 UTC m=+0.082743952 container exec 2552e27573042b86043f6eb7d85be1b045b99c64a99b48fb64027534fb48d8b4 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-keepalived-nfs-cephfs-compute-0-ewssxv, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., version=2.2.4, distribution-scope=public, io.openshift.tags=Ceph keepalived, vcs-type=git, summary=Provides keepalived on RHEL 9 for Ceph., release=1793, name=keepalived, build-date=2023-02-22T09:23:20, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., architecture=x86_64, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, com.redhat.component=keepalived-container, io.k8s.display-name=Keepalived on RHEL 9, io.openshift.expose-services=, description=keepalived for Ceph, io.buildah.version=1.28.2)
Dec 11 09:18:26 compute-0 python3[98974]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint radosgw-admin quay.io/ceph/ceph:v19 --fsid 31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch ps --daemon-type rgw --format json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 11 09:18:26 compute-0 podman[98989]: 2025-12-11 09:18:26.385905448 +0000 UTC m=+0.100984785 container exec_died 2552e27573042b86043f6eb7d85be1b045b99c64a99b48fb64027534fb48d8b4 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-keepalived-nfs-cephfs-compute-0-ewssxv, summary=Provides keepalived on RHEL 9 for Ceph., io.openshift.tags=Ceph keepalived, vcs-type=git, com.redhat.component=keepalived-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=keepalived, distribution-scope=public, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, vendor=Red Hat, Inc., architecture=x86_64, build-date=2023-02-22T09:23:20, description=keepalived for Ceph, io.buildah.version=1.28.2, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, io.k8s.display-name=Keepalived on RHEL 9, version=2.2.4, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1793, io.openshift.expose-services=, maintainer=Guillaume Abrioux <gabrioux@redhat.com>)
Dec 11 09:18:26 compute-0 podman[99008]: 2025-12-11 09:18:26.452962465 +0000 UTC m=+0.056795892 container create 46ceca9a808379ea5045c8079974d044e9ee131bd6ef50a39650145b7e401c1c (image=quay.io/ceph/ceph:v19, name=silly_hellman, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 11 09:18:26 compute-0 systemd[1]: Started libpod-conmon-46ceca9a808379ea5045c8079974d044e9ee131bd6ef50a39650145b7e401c1c.scope.
Dec 11 09:18:26 compute-0 systemd[1]: Started libcrun container.
Dec 11 09:18:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f5611a3671c5b2da56b20b5f00586423f6a95cb359eb1205a1b12985e17b4ed3/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec 11 09:18:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f5611a3671c5b2da56b20b5f00586423f6a95cb359eb1205a1b12985e17b4ed3/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 11 09:18:26 compute-0 podman[99008]: 2025-12-11 09:18:26.429845522 +0000 UTC m=+0.033678969 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Dec 11 09:18:26 compute-0 podman[99008]: 2025-12-11 09:18:26.537826312 +0000 UTC m=+0.141659759 container init 46ceca9a808379ea5045c8079974d044e9ee131bd6ef50a39650145b7e401c1c (image=quay.io/ceph/ceph:v19, name=silly_hellman, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 11 09:18:26 compute-0 podman[99008]: 2025-12-11 09:18:26.545269361 +0000 UTC m=+0.149102788 container start 46ceca9a808379ea5045c8079974d044e9ee131bd6ef50a39650145b7e401c1c (image=quay.io/ceph/ceph:v19, name=silly_hellman, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.40.1)
Dec 11 09:18:26 compute-0 podman[99008]: 2025-12-11 09:18:26.548972925 +0000 UTC m=+0.152806352 container attach 46ceca9a808379ea5045c8079974d044e9ee131bd6ef50a39650145b7e401c1c (image=quay.io/ceph/ceph:v19, name=silly_hellman, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 11 09:18:26 compute-0 silly_hellman[99048]: ERROR: invalid flag --daemon-type
Dec 11 09:18:26 compute-0 systemd[1]: libpod-46ceca9a808379ea5045c8079974d044e9ee131bd6ef50a39650145b7e401c1c.scope: Deactivated successfully.
Dec 11 09:18:26 compute-0 conmon[99048]: conmon 46ceca9a808379ea5045 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-46ceca9a808379ea5045c8079974d044e9ee131bd6ef50a39650145b7e401c1c.scope/container/memory.events
Dec 11 09:18:26 compute-0 podman[99008]: 2025-12-11 09:18:26.612952868 +0000 UTC m=+0.216786305 container died 46ceca9a808379ea5045c8079974d044e9ee131bd6ef50a39650145b7e401c1c (image=quay.io/ceph/ceph:v19, name=silly_hellman, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2)
Dec 11 09:18:26 compute-0 podman[99063]: 2025-12-11 09:18:26.638763744 +0000 UTC m=+0.065129409 container exec f6be69c7d77625e49bceb27822a33c1c522509b882e158ee75cfb836ad9761a2 (image=quay.io/prometheus/alertmanager:v0.25.0, name=ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-alertmanager-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec 11 09:18:26 compute-0 systemd[1]: var-lib-containers-storage-overlay-f5611a3671c5b2da56b20b5f00586423f6a95cb359eb1205a1b12985e17b4ed3-merged.mount: Deactivated successfully.
Dec 11 09:18:26 compute-0 podman[99008]: 2025-12-11 09:18:26.663634421 +0000 UTC m=+0.267467838 container remove 46ceca9a808379ea5045c8079974d044e9ee131bd6ef50a39650145b7e401c1c (image=quay.io/ceph/ceph:v19, name=silly_hellman, ceph=True, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=squid, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 11 09:18:26 compute-0 systemd[1]: libpod-conmon-46ceca9a808379ea5045c8079974d044e9ee131bd6ef50a39650145b7e401c1c.scope: Deactivated successfully.
Dec 11 09:18:26 compute-0 sudo[98969]: pam_unix(sudo:session): session closed for user root
Dec 11 09:18:26 compute-0 podman[99063]: 2025-12-11 09:18:26.709430292 +0000 UTC m=+0.135795947 container exec_died f6be69c7d77625e49bceb27822a33c1c522509b882e158ee75cfb836ad9761a2 (image=quay.io/prometheus/alertmanager:v0.25.0, name=ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-alertmanager-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec 11 09:18:26 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-2-0-compute-0-iryjby[94782]: 11/12/2025 09:18:26 : epoch 693a8bfe : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fef70004440 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 11 09:18:26 compute-0 radosgw[93354]: ====== starting new request req=0x7f36cd2845d0 =====
Dec 11 09:18:26 compute-0 radosgw[93354]: ====== req done req=0x7f36cd2845d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 11 09:18:26 compute-0 radosgw[93354]: beast: 0x7f36cd2845d0: 192.168.122.102 - anonymous [11/Dec/2025:09:18:26.952 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 11 09:18:26 compute-0 podman[99159]: 2025-12-11 09:18:26.955124648 +0000 UTC m=+0.063393627 container exec 3ec8d07fc34e7ed8ee0d4326c60dde172854d4253d34e863a630ec7b70332f71 (image=quay.io/ceph/grafana:10.4.0, name=ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0, maintainer=Grafana Labs <hello@grafana.com>)
Dec 11 09:18:26 compute-0 radosgw[93354]: ====== starting new request req=0x7f36cd2845d0 =====
Dec 11 09:18:26 compute-0 radosgw[93354]: ====== req done req=0x7f36cd2845d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 11 09:18:26 compute-0 radosgw[93354]: beast: 0x7f36cd2845d0: 192.168.122.100 - anonymous [11/Dec/2025:09:18:26.979 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 11 09:18:27 compute-0 ceph-mon[74426]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0)
Dec 11 09:18:27 compute-0 ceph-mon[74426]: log_channel(audit) log [INF] : from='mgr.14577 192.168.122.100:0/2102670525' entity='mgr.compute-0.wwpcae' 
Dec 11 09:18:27 compute-0 ceph-mon[74426]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Dec 11 09:18:27 compute-0 ceph-mon[74426]: log_channel(audit) log [INF] : from='mgr.14577 192.168.122.100:0/2102670525' entity='mgr.compute-0.wwpcae' 
Dec 11 09:18:27 compute-0 ceph-mgr[74715]: log_channel(cluster) log [DBG] : pgmap v6: 353 pgs: 353 active+clean; 457 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 11 09:18:27 compute-0 ceph-mon[74426]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "19"} v 0)
Dec 11 09:18:27 compute-0 ceph-mon[74426]: log_channel(audit) log [INF] : from='mgr.14577 192.168.122.100:0/2102670525' entity='mgr.compute-0.wwpcae' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "19"}]: dispatch
Dec 11 09:18:27 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-2-0-compute-0-iryjby[94782]: 11/12/2025 09:18:27 : epoch 693a8bfe : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fef60003490 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 11 09:18:27 compute-0 ceph-mon[74426]: from='mgr.14577 192.168.122.100:0/2102670525' entity='mgr.compute-0.wwpcae' 
Dec 11 09:18:27 compute-0 ceph-mon[74426]: from='mgr.14577 192.168.122.100:0/2102670525' entity='mgr.compute-0.wwpcae' 
Dec 11 09:18:27 compute-0 ceph-mon[74426]: from='mgr.14577 192.168.122.100:0/2102670525' entity='mgr.compute-0.wwpcae' 
Dec 11 09:18:27 compute-0 ceph-mon[74426]: from='mgr.14577 192.168.122.100:0/2102670525' entity='mgr.compute-0.wwpcae' 
Dec 11 09:18:27 compute-0 ceph-mon[74426]: from='mgr.14577 192.168.122.100:0/2102670525' entity='mgr.compute-0.wwpcae' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "19"}]: dispatch
Dec 11 09:18:27 compute-0 podman[99159]: 2025-12-11 09:18:27.173791219 +0000 UTC m=+0.282060208 container exec_died 3ec8d07fc34e7ed8ee0d4326c60dde172854d4253d34e863a630ec7b70332f71 (image=quay.io/ceph/grafana:10.4.0, name=ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0, maintainer=Grafana Labs <hello@grafana.com>)
Dec 11 09:18:27 compute-0 ceph-mon[74426]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Dec 11 09:18:27 compute-0 ceph-mon[74426]: log_channel(audit) log [INF] : from='mgr.14577 192.168.122.100:0/2102670525' entity='mgr.compute-0.wwpcae' 
Dec 11 09:18:27 compute-0 ceph-mon[74426]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Dec 11 09:18:27 compute-0 ceph-mon[74426]: log_channel(audit) log [INF] : from='mgr.14577 192.168.122.100:0/2102670525' entity='mgr.compute-0.wwpcae' 
Dec 11 09:18:27 compute-0 ceph-mon[74426]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"} v 0)
Dec 11 09:18:27 compute-0 ceph-mon[74426]: log_channel(audit) log [INF] : from='mgr.14577 192.168.122.100:0/2102670525' entity='mgr.compute-0.wwpcae' cmd=[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]: dispatch
Dec 11 09:18:27 compute-0 ceph-mon[74426]: log_channel(audit) log [INF] : from='mgr.14577 192.168.122.100:0/2102670525' entity='mgr.compute-0.wwpcae' cmd='[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]': finished
Dec 11 09:18:27 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-2-0-compute-0-iryjby[94782]: 11/12/2025 09:18:27 : epoch 693a8bfe : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fef500032f0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 11 09:18:27 compute-0 podman[99273]: 2025-12-11 09:18:27.547388867 +0000 UTC m=+0.054570703 container exec 90c28ad7bc328840534f9285d34005394d82c163694e46ece6c6c1f2d03e7fe2 (image=quay.io/prometheus/prometheus:v2.51.0, name=ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-prometheus-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec 11 09:18:27 compute-0 podman[99273]: 2025-12-11 09:18:27.595707997 +0000 UTC m=+0.102889803 container exec_died 90c28ad7bc328840534f9285d34005394d82c163694e46ece6c6c1f2d03e7fe2 (image=quay.io/prometheus/prometheus:v2.51.0, name=ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-prometheus-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec 11 09:18:27 compute-0 sudo[98493]: pam_unix(sudo:session): session closed for user root
Dec 11 09:18:27 compute-0 ceph-mon[74426]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec 11 09:18:27 compute-0 ceph-mon[74426]: log_channel(audit) log [INF] : from='mgr.14577 192.168.122.100:0/2102670525' entity='mgr.compute-0.wwpcae' 
Dec 11 09:18:27 compute-0 ceph-mon[74426]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec 11 09:18:27 compute-0 ceph-mon[74426]: log_channel(audit) log [INF] : from='mgr.14577 192.168.122.100:0/2102670525' entity='mgr.compute-0.wwpcae' 
Dec 11 09:18:27 compute-0 sudo[99316]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 11 09:18:27 compute-0 sudo[99316]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 11 09:18:27 compute-0 sudo[99316]: pam_unix(sudo:session): session closed for user root
Dec 11 09:18:27 compute-0 sudo[99341]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --timeout 895 gather-facts
Dec 11 09:18:27 compute-0 sudo[99341]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 11 09:18:28 compute-0 ceph-mon[74426]: mon.compute-0@0(leader).osd e114 do_prune osdmap full prune enabled
Dec 11 09:18:28 compute-0 ceph-mon[74426]: log_channel(audit) log [INF] : from='mgr.14577 192.168.122.100:0/2102670525' entity='mgr.compute-0.wwpcae' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "19"}]': finished
Dec 11 09:18:28 compute-0 ceph-mon[74426]: mon.compute-0@0(leader).osd e115 e115: 3 total, 3 up, 3 in
Dec 11 09:18:28 compute-0 ceph-mon[74426]: log_channel(cluster) log [DBG] : osdmap e115: 3 total, 3 up, 3 in
Dec 11 09:18:28 compute-0 ceph-mon[74426]: log_channel(cluster) log [DBG] : mgrmap e35: compute-0.wwpcae(active, since 5s), standbys: compute-2.uiimcn, compute-1.unesvp
Dec 11 09:18:28 compute-0 ceph-mon[74426]: pgmap v6: 353 pgs: 353 active+clean; 457 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 11 09:18:28 compute-0 ceph-mon[74426]: from='mgr.14577 192.168.122.100:0/2102670525' entity='mgr.compute-0.wwpcae' 
Dec 11 09:18:28 compute-0 ceph-mon[74426]: from='mgr.14577 192.168.122.100:0/2102670525' entity='mgr.compute-0.wwpcae' 
Dec 11 09:18:28 compute-0 ceph-mon[74426]: from='mgr.14577 192.168.122.100:0/2102670525' entity='mgr.compute-0.wwpcae' cmd=[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]: dispatch
Dec 11 09:18:28 compute-0 ceph-mon[74426]: from='mgr.14577 192.168.122.100:0/2102670525' entity='mgr.compute-0.wwpcae' cmd='[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]': finished
Dec 11 09:18:28 compute-0 ceph-mon[74426]: from='mgr.14577 192.168.122.100:0/2102670525' entity='mgr.compute-0.wwpcae' 
Dec 11 09:18:28 compute-0 ceph-mon[74426]: from='mgr.14577 192.168.122.100:0/2102670525' entity='mgr.compute-0.wwpcae' 
Dec 11 09:18:28 compute-0 ceph-mon[74426]: from='mgr.14577 192.168.122.100:0/2102670525' entity='mgr.compute-0.wwpcae' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "19"}]': finished
Dec 11 09:18:28 compute-0 ceph-mon[74426]: osdmap e115: 3 total, 3 up, 3 in
Dec 11 09:18:28 compute-0 ceph-mon[74426]: mgrmap e35: compute-0.wwpcae(active, since 5s), standbys: compute-2.uiimcn, compute-1.unesvp
Dec 11 09:18:28 compute-0 sudo[99341]: pam_unix(sudo:session): session closed for user root
Dec 11 09:18:28 compute-0 sudo[99397]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 11 09:18:28 compute-0 sudo[99397]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 11 09:18:28 compute-0 sudo[99397]: pam_unix(sudo:session): session closed for user root
Dec 11 09:18:28 compute-0 sudo[99422]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 list-networks
Dec 11 09:18:28 compute-0 sudo[99422]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 11 09:18:28 compute-0 ceph-mon[74426]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0)
Dec 11 09:18:28 compute-0 ceph-mon[74426]: log_channel(audit) log [INF] : from='mgr.14577 192.168.122.100:0/2102670525' entity='mgr.compute-0.wwpcae' 
Dec 11 09:18:28 compute-0 ceph-mon[74426]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Dec 11 09:18:28 compute-0 ceph-mon[74426]: log_channel(audit) log [INF] : from='mgr.14577 192.168.122.100:0/2102670525' entity='mgr.compute-0.wwpcae' 
Dec 11 09:18:28 compute-0 ceph-mon[74426]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"} v 0)
Dec 11 09:18:28 compute-0 ceph-mon[74426]: log_channel(audit) log [INF] : from='mgr.14577 192.168.122.100:0/2102670525' entity='mgr.compute-0.wwpcae' cmd=[{"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"}]: dispatch
Dec 11 09:18:28 compute-0 sudo[99422]: pam_unix(sudo:session): session closed for user root
Dec 11 09:18:28 compute-0 ceph-mon[74426]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec 11 09:18:28 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-2-0-compute-0-iryjby[94782]: 11/12/2025 09:18:28 : epoch 693a8bfe : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fef4c002690 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 11 09:18:28 compute-0 ceph-mon[74426]: log_channel(audit) log [INF] : from='mgr.14577 192.168.122.100:0/2102670525' entity='mgr.compute-0.wwpcae' 
Dec 11 09:18:28 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 115 pg[10.12( v 56'1088 (0'0,56'1088] local-lis/les=70/71 n=4 ec=61/50 lis/c=70/70 les/c/f=71/71/0 sis=115 pruub=8.359090805s) [2] r=-1 lpr=115 pi=[70,115)/1 crt=56'1088 mlcod 0'0 active pruub 258.337158203s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 11 09:18:28 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 115 pg[10.12( v 56'1088 (0'0,56'1088] local-lis/les=70/71 n=4 ec=61/50 lis/c=70/70 les/c/f=71/71/0 sis=115 pruub=8.358860016s) [2] r=-1 lpr=115 pi=[70,115)/1 crt=56'1088 mlcod 0'0 unknown NOTIFY pruub 258.337158203s@ mbc={}] state<Start>: transitioning to Stray
Dec 11 09:18:28 compute-0 ceph-mon[74426]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec 11 09:18:28 compute-0 ceph-mon[74426]: log_channel(audit) log [INF] : from='mgr.14577 192.168.122.100:0/2102670525' entity='mgr.compute-0.wwpcae' 
Dec 11 09:18:28 compute-0 ceph-mon[74426]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} v 0)
Dec 11 09:18:28 compute-0 ceph-mon[74426]: log_channel(audit) log [INF] : from='mgr.14577 192.168.122.100:0/2102670525' entity='mgr.compute-0.wwpcae' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Dec 11 09:18:28 compute-0 ceph-mon[74426]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 11 09:18:28 compute-0 ceph-mon[74426]: log_channel(audit) log [DBG] : from='mgr.14577 192.168.122.100:0/2102670525' entity='mgr.compute-0.wwpcae' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 11 09:18:28 compute-0 ceph-mon[74426]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Dec 11 09:18:28 compute-0 ceph-mon[74426]: log_channel(audit) log [INF] : from='mgr.14577 192.168.122.100:0/2102670525' entity='mgr.compute-0.wwpcae' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec 11 09:18:28 compute-0 ceph-mgr[74715]: [cephadm INFO cephadm.serve] Updating compute-0:/etc/ceph/ceph.conf
Dec 11 09:18:28 compute-0 ceph-mgr[74715]: log_channel(cephadm) log [INF] : Updating compute-0:/etc/ceph/ceph.conf
Dec 11 09:18:28 compute-0 ceph-mgr[74715]: [cephadm INFO cephadm.serve] Updating compute-1:/etc/ceph/ceph.conf
Dec 11 09:18:28 compute-0 ceph-mgr[74715]: log_channel(cephadm) log [INF] : Updating compute-1:/etc/ceph/ceph.conf
Dec 11 09:18:28 compute-0 ceph-mgr[74715]: [cephadm INFO cephadm.serve] Updating compute-2:/etc/ceph/ceph.conf
Dec 11 09:18:28 compute-0 ceph-mgr[74715]: log_channel(cephadm) log [INF] : Updating compute-2:/etc/ceph/ceph.conf
Dec 11 09:18:28 compute-0 sudo[99466]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /etc/ceph
Dec 11 09:18:28 compute-0 sudo[99466]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 11 09:18:28 compute-0 radosgw[93354]: ====== starting new request req=0x7f36cd2845d0 =====
Dec 11 09:18:28 compute-0 radosgw[93354]: ====== req done req=0x7f36cd2845d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 11 09:18:28 compute-0 radosgw[93354]: beast: 0x7f36cd2845d0: 192.168.122.102 - anonymous [11/Dec/2025:09:18:28.955 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 11 09:18:28 compute-0 sudo[99466]: pam_unix(sudo:session): session closed for user root
Dec 11 09:18:28 compute-0 radosgw[93354]: ====== starting new request req=0x7f36cd2845d0 =====
Dec 11 09:18:28 compute-0 radosgw[93354]: ====== req done req=0x7f36cd2845d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 11 09:18:28 compute-0 radosgw[93354]: beast: 0x7f36cd2845d0: 192.168.122.100 - anonymous [11/Dec/2025:09:18:28.981 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 11 09:18:29 compute-0 sudo[99492]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /tmp/cephadm-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060/etc/ceph
Dec 11 09:18:29 compute-0 sudo[99492]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 11 09:18:29 compute-0 sudo[99492]: pam_unix(sudo:session): session closed for user root
Dec 11 09:18:29 compute-0 ceph-mgr[74715]: log_channel(cluster) log [DBG] : pgmap v8: 353 pgs: 353 active+clean; 457 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 11 09:18:29 compute-0 ceph-mon[74426]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "20"} v 0)
Dec 11 09:18:29 compute-0 ceph-mon[74426]: log_channel(audit) log [INF] : from='mgr.14577 192.168.122.100:0/2102670525' entity='mgr.compute-0.wwpcae' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "20"}]: dispatch
Dec 11 09:18:29 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-2-0-compute-0-iryjby[94782]: 11/12/2025 09:18:29 : epoch 693a8bfe : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fef70004440 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 11 09:18:29 compute-0 sudo[99517]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/touch /tmp/cephadm-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060/etc/ceph/ceph.conf.new
Dec 11 09:18:29 compute-0 sudo[99517]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 11 09:18:29 compute-0 sudo[99517]: pam_unix(sudo:session): session closed for user root
Dec 11 09:18:29 compute-0 sudo[99542]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R ceph-admin /tmp/cephadm-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060
Dec 11 09:18:29 compute-0 sudo[99542]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 11 09:18:29 compute-0 sudo[99542]: pam_unix(sudo:session): session closed for user root
Dec 11 09:18:29 compute-0 sudo[99567]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060/etc/ceph/ceph.conf.new
Dec 11 09:18:29 compute-0 sudo[99567]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 11 09:18:29 compute-0 sudo[99567]: pam_unix(sudo:session): session closed for user root
Dec 11 09:18:29 compute-0 sudo[99615]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R 0:0 /tmp/cephadm-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060/etc/ceph/ceph.conf.new
Dec 11 09:18:29 compute-0 sudo[99615]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 11 09:18:29 compute-0 sudo[99615]: pam_unix(sudo:session): session closed for user root
Dec 11 09:18:29 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-2-0-compute-0-iryjby[94782]: 11/12/2025 09:18:29 : epoch 693a8bfe : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fef60003490 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 11 09:18:29 compute-0 sudo[99640]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060/etc/ceph/ceph.conf.new
Dec 11 09:18:29 compute-0 sudo[99640]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 11 09:18:29 compute-0 sudo[99640]: pam_unix(sudo:session): session closed for user root
Dec 11 09:18:29 compute-0 sudo[99665]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mv /tmp/cephadm-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060/etc/ceph/ceph.conf.new /etc/ceph/ceph.conf
Dec 11 09:18:29 compute-0 sudo[99665]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 11 09:18:29 compute-0 sudo[99665]: pam_unix(sudo:session): session closed for user root
Dec 11 09:18:29 compute-0 ceph-mgr[74715]: [cephadm INFO cephadm.serve] Updating compute-0:/var/lib/ceph/31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060/config/ceph.conf
Dec 11 09:18:29 compute-0 ceph-mgr[74715]: log_channel(cephadm) log [INF] : Updating compute-0:/var/lib/ceph/31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060/config/ceph.conf
Dec 11 09:18:29 compute-0 sudo[99690]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /var/lib/ceph/31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060/config
Dec 11 09:18:29 compute-0 sudo[99690]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 11 09:18:29 compute-0 sudo[99690]: pam_unix(sudo:session): session closed for user root
Dec 11 09:18:29 compute-0 ceph-mgr[74715]: [cephadm INFO cephadm.serve] Updating compute-1:/var/lib/ceph/31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060/config/ceph.conf
Dec 11 09:18:29 compute-0 ceph-mgr[74715]: log_channel(cephadm) log [INF] : Updating compute-1:/var/lib/ceph/31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060/config/ceph.conf
Dec 11 09:18:29 compute-0 ceph-mon[74426]: from='mgr.14577 192.168.122.100:0/2102670525' entity='mgr.compute-0.wwpcae' 
Dec 11 09:18:29 compute-0 ceph-mon[74426]: from='mgr.14577 192.168.122.100:0/2102670525' entity='mgr.compute-0.wwpcae' 
Dec 11 09:18:29 compute-0 ceph-mon[74426]: from='mgr.14577 192.168.122.100:0/2102670525' entity='mgr.compute-0.wwpcae' cmd=[{"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"}]: dispatch
Dec 11 09:18:29 compute-0 ceph-mon[74426]: from='mgr.14577 192.168.122.100:0/2102670525' entity='mgr.compute-0.wwpcae' 
Dec 11 09:18:29 compute-0 ceph-mon[74426]: from='mgr.14577 192.168.122.100:0/2102670525' entity='mgr.compute-0.wwpcae' 
Dec 11 09:18:29 compute-0 ceph-mon[74426]: from='mgr.14577 192.168.122.100:0/2102670525' entity='mgr.compute-0.wwpcae' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Dec 11 09:18:29 compute-0 ceph-mon[74426]: from='mgr.14577 192.168.122.100:0/2102670525' entity='mgr.compute-0.wwpcae' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 11 09:18:29 compute-0 ceph-mon[74426]: from='mgr.14577 192.168.122.100:0/2102670525' entity='mgr.compute-0.wwpcae' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec 11 09:18:29 compute-0 ceph-mon[74426]: Updating compute-0:/etc/ceph/ceph.conf
Dec 11 09:18:29 compute-0 ceph-mon[74426]: Updating compute-1:/etc/ceph/ceph.conf
Dec 11 09:18:29 compute-0 ceph-mon[74426]: Updating compute-2:/etc/ceph/ceph.conf
Dec 11 09:18:29 compute-0 ceph-mon[74426]: pgmap v8: 353 pgs: 353 active+clean; 457 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 11 09:18:29 compute-0 ceph-mon[74426]: from='mgr.14577 192.168.122.100:0/2102670525' entity='mgr.compute-0.wwpcae' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "20"}]: dispatch
Dec 11 09:18:29 compute-0 sudo[99715]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /tmp/cephadm-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060/var/lib/ceph/31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060/config
Dec 11 09:18:29 compute-0 sudo[99715]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 11 09:18:29 compute-0 sudo[99715]: pam_unix(sudo:session): session closed for user root
Dec 11 09:18:29 compute-0 ceph-mgr[74715]: [cephadm INFO cephadm.serve] Updating compute-2:/var/lib/ceph/31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060/config/ceph.conf
Dec 11 09:18:29 compute-0 ceph-mgr[74715]: log_channel(cephadm) log [INF] : Updating compute-2:/var/lib/ceph/31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060/config/ceph.conf
Dec 11 09:18:29 compute-0 sudo[99740]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/touch /tmp/cephadm-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060/var/lib/ceph/31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060/config/ceph.conf.new
Dec 11 09:18:29 compute-0 sudo[99740]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 11 09:18:29 compute-0 sudo[99740]: pam_unix(sudo:session): session closed for user root
Dec 11 09:18:29 compute-0 sudo[99765]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R ceph-admin /tmp/cephadm-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060
Dec 11 09:18:29 compute-0 sudo[99765]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 11 09:18:29 compute-0 sudo[99765]: pam_unix(sudo:session): session closed for user root
Dec 11 09:18:29 compute-0 sudo[99790]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060/var/lib/ceph/31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060/config/ceph.conf.new
Dec 11 09:18:29 compute-0 sudo[99790]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 11 09:18:29 compute-0 sudo[99790]: pam_unix(sudo:session): session closed for user root
Dec 11 09:18:29 compute-0 ceph-mon[74426]: mon.compute-0@0(leader).osd e115 do_prune osdmap full prune enabled
Dec 11 09:18:29 compute-0 ceph-mon[74426]: log_channel(audit) log [INF] : from='mgr.14577 192.168.122.100:0/2102670525' entity='mgr.compute-0.wwpcae' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "20"}]': finished
Dec 11 09:18:29 compute-0 ceph-mon[74426]: mon.compute-0@0(leader).osd e116 e116: 3 total, 3 up, 3 in
Dec 11 09:18:29 compute-0 ceph-mon[74426]: log_channel(cluster) log [DBG] : osdmap e116: 3 total, 3 up, 3 in
Dec 11 09:18:29 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 116 pg[10.12( v 56'1088 (0'0,56'1088] local-lis/les=70/71 n=4 ec=61/50 lis/c=70/70 les/c/f=71/71/0 sis=116) [2]/[1] r=0 lpr=116 pi=[70,116)/1 crt=56'1088 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Dec 11 09:18:29 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 116 pg[10.12( v 56'1088 (0'0,56'1088] local-lis/les=70/71 n=4 ec=61/50 lis/c=70/70 les/c/f=71/71/0 sis=116) [2]/[1] r=0 lpr=116 pi=[70,116)/1 crt=56'1088 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Dec 11 09:18:29 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 116 pg[10.13( empty local-lis/les=0/0 n=0 ec=61/50 lis/c=68/68 les/c/f=69/69/0 sis=116) [1] r=0 lpr=116 pi=[68,116)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 11 09:18:29 compute-0 sudo[99838]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R 0:0 /tmp/cephadm-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060/var/lib/ceph/31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060/config/ceph.conf.new
Dec 11 09:18:29 compute-0 sudo[99838]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 11 09:18:30 compute-0 sudo[99838]: pam_unix(sudo:session): session closed for user root
Dec 11 09:18:30 compute-0 sudo[99863]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060/var/lib/ceph/31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060/config/ceph.conf.new
Dec 11 09:18:30 compute-0 sudo[99863]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 11 09:18:30 compute-0 sudo[99863]: pam_unix(sudo:session): session closed for user root
Dec 11 09:18:30 compute-0 sudo[99888]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mv /tmp/cephadm-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060/var/lib/ceph/31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060/config/ceph.conf.new /var/lib/ceph/31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060/config/ceph.conf
Dec 11 09:18:30 compute-0 sudo[99888]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 11 09:18:30 compute-0 sudo[99888]: pam_unix(sudo:session): session closed for user root
Dec 11 09:18:30 compute-0 ceph-mgr[74715]: [cephadm INFO cephadm.serve] Updating compute-0:/etc/ceph/ceph.client.admin.keyring
Dec 11 09:18:30 compute-0 ceph-mgr[74715]: log_channel(cephadm) log [INF] : Updating compute-0:/etc/ceph/ceph.client.admin.keyring
Dec 11 09:18:30 compute-0 sudo[99913]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /etc/ceph
Dec 11 09:18:30 compute-0 sudo[99913]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 11 09:18:30 compute-0 sudo[99913]: pam_unix(sudo:session): session closed for user root
Dec 11 09:18:30 compute-0 ceph-mgr[74715]: [cephadm INFO cephadm.serve] Updating compute-1:/etc/ceph/ceph.client.admin.keyring
Dec 11 09:18:30 compute-0 ceph-mgr[74715]: log_channel(cephadm) log [INF] : Updating compute-1:/etc/ceph/ceph.client.admin.keyring
Dec 11 09:18:30 compute-0 sudo[99938]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /tmp/cephadm-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060/etc/ceph
Dec 11 09:18:30 compute-0 sudo[99938]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 11 09:18:30 compute-0 sudo[99938]: pam_unix(sudo:session): session closed for user root
Dec 11 09:18:30 compute-0 sudo[99963]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/touch /tmp/cephadm-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060/etc/ceph/ceph.client.admin.keyring.new
Dec 11 09:18:30 compute-0 sudo[99963]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 11 09:18:30 compute-0 sudo[99963]: pam_unix(sudo:session): session closed for user root
Dec 11 09:18:30 compute-0 ceph-mgr[74715]: [cephadm INFO cephadm.serve] Updating compute-2:/etc/ceph/ceph.client.admin.keyring
Dec 11 09:18:30 compute-0 ceph-mgr[74715]: log_channel(cephadm) log [INF] : Updating compute-2:/etc/ceph/ceph.client.admin.keyring
Dec 11 09:18:30 compute-0 sudo[99988]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R ceph-admin /tmp/cephadm-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060
Dec 11 09:18:30 compute-0 sudo[99988]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 11 09:18:30 compute-0 sudo[99988]: pam_unix(sudo:session): session closed for user root
Dec 11 09:18:30 compute-0 sudo[100013]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060/etc/ceph/ceph.client.admin.keyring.new
Dec 11 09:18:30 compute-0 sudo[100013]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 11 09:18:30 compute-0 sudo[100013]: pam_unix(sudo:session): session closed for user root
Dec 11 09:18:30 compute-0 sudo[100061]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R 0:0 /tmp/cephadm-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060/etc/ceph/ceph.client.admin.keyring.new
Dec 11 09:18:30 compute-0 sudo[100061]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 11 09:18:30 compute-0 sudo[100061]: pam_unix(sudo:session): session closed for user root
Dec 11 09:18:30 compute-0 sudo[100086]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 600 /tmp/cephadm-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060/etc/ceph/ceph.client.admin.keyring.new
Dec 11 09:18:30 compute-0 sudo[100086]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 11 09:18:30 compute-0 sudo[100086]: pam_unix(sudo:session): session closed for user root
Dec 11 09:18:30 compute-0 sudo[100112]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mv /tmp/cephadm-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060/etc/ceph/ceph.client.admin.keyring.new /etc/ceph/ceph.client.admin.keyring
Dec 11 09:18:30 compute-0 sudo[100112]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 11 09:18:30 compute-0 sudo[100112]: pam_unix(sudo:session): session closed for user root
Dec 11 09:18:30 compute-0 ceph-mgr[74715]: [cephadm INFO cephadm.serve] Updating compute-0:/var/lib/ceph/31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060/config/ceph.client.admin.keyring
Dec 11 09:18:30 compute-0 ceph-mgr[74715]: log_channel(cephadm) log [INF] : Updating compute-0:/var/lib/ceph/31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060/config/ceph.client.admin.keyring
Dec 11 09:18:30 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-2-0-compute-0-iryjby[94782]: 11/12/2025 09:18:30 : epoch 693a8bfe : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fef50003c10 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 11 09:18:30 compute-0 sudo[100137]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /var/lib/ceph/31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060/config
Dec 11 09:18:30 compute-0 sudo[100137]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 11 09:18:30 compute-0 sudo[100137]: pam_unix(sudo:session): session closed for user root
Dec 11 09:18:30 compute-0 sudo[100162]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /tmp/cephadm-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060/var/lib/ceph/31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060/config
Dec 11 09:18:30 compute-0 sudo[100162]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 11 09:18:30 compute-0 sudo[100162]: pam_unix(sudo:session): session closed for user root
Dec 11 09:18:30 compute-0 ceph-mon[74426]: mon.compute-0@0(leader).osd e116 do_prune osdmap full prune enabled
Dec 11 09:18:30 compute-0 ceph-mon[74426]: Updating compute-0:/var/lib/ceph/31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060/config/ceph.conf
Dec 11 09:18:30 compute-0 ceph-mon[74426]: Updating compute-1:/var/lib/ceph/31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060/config/ceph.conf
Dec 11 09:18:30 compute-0 ceph-mon[74426]: Updating compute-2:/var/lib/ceph/31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060/config/ceph.conf
Dec 11 09:18:30 compute-0 ceph-mon[74426]: from='mgr.14577 192.168.122.100:0/2102670525' entity='mgr.compute-0.wwpcae' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "20"}]': finished
Dec 11 09:18:30 compute-0 ceph-mon[74426]: osdmap e116: 3 total, 3 up, 3 in
Dec 11 09:18:30 compute-0 ceph-mon[74426]: Updating compute-0:/etc/ceph/ceph.client.admin.keyring
Dec 11 09:18:30 compute-0 ceph-mon[74426]: Updating compute-1:/etc/ceph/ceph.client.admin.keyring
Dec 11 09:18:30 compute-0 ceph-mon[74426]: Updating compute-2:/etc/ceph/ceph.client.admin.keyring
Dec 11 09:18:30 compute-0 ceph-mgr[74715]: [cephadm INFO cephadm.serve] Updating compute-1:/var/lib/ceph/31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060/config/ceph.client.admin.keyring
Dec 11 09:18:30 compute-0 ceph-mgr[74715]: log_channel(cephadm) log [INF] : Updating compute-1:/var/lib/ceph/31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060/config/ceph.client.admin.keyring
Dec 11 09:18:30 compute-0 ceph-mon[74426]: mon.compute-0@0(leader).osd e117 e117: 3 total, 3 up, 3 in
Dec 11 09:18:30 compute-0 ceph-mon[74426]: log_channel(cluster) log [DBG] : osdmap e117: 3 total, 3 up, 3 in
Dec 11 09:18:30 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 117 pg[10.13( empty local-lis/les=0/0 n=0 ec=61/50 lis/c=68/68 les/c/f=69/69/0 sis=117) [1]/[2] r=-1 lpr=117 pi=[68,117)/1 crt=0'0 mlcod 0'0 remapped mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 11 09:18:30 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 117 pg[10.13( empty local-lis/les=0/0 n=0 ec=61/50 lis/c=68/68 les/c/f=69/69/0 sis=117) [1]/[2] r=-1 lpr=117 pi=[68,117)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec 11 09:18:30 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 117 pg[10.12( v 56'1088 (0'0,56'1088] local-lis/les=116/117 n=4 ec=61/50 lis/c=70/70 les/c/f=71/71/0 sis=116) [2]/[1] async=[2] r=0 lpr=116 pi=[70,116)/1 crt=56'1088 mlcod 0'0 active+remapped mbc={255={(0+1)=4}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 11 09:18:30 compute-0 sudo[100187]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/touch /tmp/cephadm-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060/var/lib/ceph/31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060/config/ceph.client.admin.keyring.new
Dec 11 09:18:30 compute-0 sudo[100187]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 11 09:18:30 compute-0 sudo[100187]: pam_unix(sudo:session): session closed for user root
Dec 11 09:18:30 compute-0 radosgw[93354]: ====== starting new request req=0x7f36cd2845d0 =====
Dec 11 09:18:30 compute-0 radosgw[93354]: ====== req done req=0x7f36cd2845d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 11 09:18:30 compute-0 radosgw[93354]: beast: 0x7f36cd2845d0: 192.168.122.102 - anonymous [11/Dec/2025:09:18:30.957 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 11 09:18:30 compute-0 sudo[100212]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R ceph-admin /tmp/cephadm-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060
Dec 11 09:18:30 compute-0 sudo[100212]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 11 09:18:30 compute-0 sudo[100212]: pam_unix(sudo:session): session closed for user root
Dec 11 09:18:30 compute-0 radosgw[93354]: ====== starting new request req=0x7f36cd2845d0 =====
Dec 11 09:18:30 compute-0 radosgw[93354]: ====== req done req=0x7f36cd2845d0 op status=0 http_status=200 latency=0.001000031s ======
Dec 11 09:18:30 compute-0 radosgw[93354]: beast: 0x7f36cd2845d0: 192.168.122.100 - anonymous [11/Dec/2025:09:18:30.983 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Dec 11 09:18:31 compute-0 ceph-mgr[74715]: [cephadm INFO cephadm.serve] Updating compute-2:/var/lib/ceph/31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060/config/ceph.client.admin.keyring
Dec 11 09:18:31 compute-0 ceph-mgr[74715]: log_channel(cephadm) log [INF] : Updating compute-2:/var/lib/ceph/31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060/config/ceph.client.admin.keyring
Dec 11 09:18:31 compute-0 sudo[100238]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060/var/lib/ceph/31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060/config/ceph.client.admin.keyring.new
Dec 11 09:18:31 compute-0 sudo[100238]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 11 09:18:31 compute-0 sudo[100238]: pam_unix(sudo:session): session closed for user root
Dec 11 09:18:31 compute-0 ceph-mgr[74715]: log_channel(cluster) log [DBG] : pgmap v11: 353 pgs: 1 unknown, 352 active+clean; 457 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 30 KiB/s rd, 0 B/s wr, 12 op/s
Dec 11 09:18:31 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-2-0-compute-0-iryjby[94782]: 11/12/2025 09:18:31 : epoch 693a8bfe : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fef50003c10 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 11 09:18:31 compute-0 sudo[100286]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R 0:0 /tmp/cephadm-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060/var/lib/ceph/31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060/config/ceph.client.admin.keyring.new
Dec 11 09:18:31 compute-0 sudo[100286]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 11 09:18:31 compute-0 sudo[100286]: pam_unix(sudo:session): session closed for user root
Dec 11 09:18:31 compute-0 sudo[100311]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 600 /tmp/cephadm-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060/var/lib/ceph/31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060/config/ceph.client.admin.keyring.new
Dec 11 09:18:31 compute-0 sudo[100311]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 11 09:18:31 compute-0 ceph-mon[74426]: mon.compute-0@0(leader).osd e117 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 11 09:18:31 compute-0 ceph-mon[74426]: mon.compute-0@0(leader).osd e117 do_prune osdmap full prune enabled
Dec 11 09:18:31 compute-0 sudo[100311]: pam_unix(sudo:session): session closed for user root
Dec 11 09:18:31 compute-0 ceph-mon[74426]: mon.compute-0@0(leader).osd e118 e118: 3 total, 3 up, 3 in
Dec 11 09:18:31 compute-0 ceph-mon[74426]: log_channel(cluster) log [DBG] : osdmap e118: 3 total, 3 up, 3 in
Dec 11 09:18:31 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 118 pg[10.12( v 56'1088 (0'0,56'1088] local-lis/les=116/117 n=4 ec=61/50 lis/c=116/70 les/c/f=117/71/0 sis=118 pruub=15.653422356s) [2] async=[2] r=-1 lpr=118 pi=[70,118)/1 crt=56'1088 mlcod 56'1088 active pruub 268.031005859s@ mbc={255={}}] PeeringState::start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 11 09:18:31 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 118 pg[10.12( v 56'1088 (0'0,56'1088] local-lis/les=116/117 n=4 ec=61/50 lis/c=116/70 les/c/f=117/71/0 sis=118 pruub=15.653338432s) [2] r=-1 lpr=118 pi=[70,118)/1 crt=56'1088 mlcod 0'0 unknown NOTIFY pruub 268.031005859s@ mbc={}] state<Start>: transitioning to Stray
Dec 11 09:18:31 compute-0 sudo[100336]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mv /tmp/cephadm-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060/var/lib/ceph/31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060/config/ceph.client.admin.keyring.new /var/lib/ceph/31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060/config/ceph.client.admin.keyring
Dec 11 09:18:31 compute-0 sudo[100336]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 11 09:18:31 compute-0 sudo[100336]: pam_unix(sudo:session): session closed for user root
Dec 11 09:18:31 compute-0 ceph-mon[74426]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec 11 09:18:31 compute-0 ceph-mon[74426]: log_channel(audit) log [INF] : from='mgr.14577 192.168.122.100:0/2102670525' entity='mgr.compute-0.wwpcae' 
Dec 11 09:18:31 compute-0 ceph-mon[74426]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec 11 09:18:31 compute-0 ceph-mon[74426]: log_channel(audit) log [INF] : from='mgr.14577 192.168.122.100:0/2102670525' entity='mgr.compute-0.wwpcae' 
Dec 11 09:18:31 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-2-0-compute-0-iryjby[94782]: 11/12/2025 09:18:31 : epoch 693a8bfe : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fef70004440 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 11 09:18:31 compute-0 ceph-mon[74426]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Dec 11 09:18:31 compute-0 ceph-mon[74426]: log_channel(audit) log [INF] : from='mgr.14577 192.168.122.100:0/2102670525' entity='mgr.compute-0.wwpcae' 
Dec 11 09:18:31 compute-0 ceph-mon[74426]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Dec 11 09:18:31 compute-0 ceph-mon[74426]: log_channel(audit) log [INF] : from='mgr.14577 192.168.122.100:0/2102670525' entity='mgr.compute-0.wwpcae' 
Dec 11 09:18:31 compute-0 ceph-mon[74426]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0)
Dec 11 09:18:31 compute-0 ceph-mon[74426]: log_channel(audit) log [INF] : from='mgr.14577 192.168.122.100:0/2102670525' entity='mgr.compute-0.wwpcae' 
Dec 11 09:18:31 compute-0 ceph-mon[74426]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Dec 11 09:18:31 compute-0 ceph-mon[74426]: log_channel(audit) log [INF] : from='mgr.14577 192.168.122.100:0/2102670525' entity='mgr.compute-0.wwpcae' 
Dec 11 09:18:31 compute-0 ceph-mon[74426]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Dec 11 09:18:31 compute-0 ceph-mon[74426]: log_channel(audit) log [INF] : from='mgr.14577 192.168.122.100:0/2102670525' entity='mgr.compute-0.wwpcae' 
Dec 11 09:18:31 compute-0 ceph-mon[74426]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Dec 11 09:18:31 compute-0 ceph-mon[74426]: log_channel(audit) log [INF] : from='mgr.14577 192.168.122.100:0/2102670525' entity='mgr.compute-0.wwpcae' 
Dec 11 09:18:31 compute-0 ceph-mon[74426]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Dec 11 09:18:31 compute-0 ceph-mon[74426]: log_channel(audit) log [DBG] : from='mgr.14577 192.168.122.100:0/2102670525' entity='mgr.compute-0.wwpcae' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec 11 09:18:31 compute-0 ceph-mon[74426]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Dec 11 09:18:31 compute-0 ceph-mon[74426]: log_channel(audit) log [INF] : from='mgr.14577 192.168.122.100:0/2102670525' entity='mgr.compute-0.wwpcae' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec 11 09:18:31 compute-0 ceph-mon[74426]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 11 09:18:31 compute-0 ceph-mon[74426]: log_channel(audit) log [DBG] : from='mgr.14577 192.168.122.100:0/2102670525' entity='mgr.compute-0.wwpcae' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 11 09:18:31 compute-0 sudo[100361]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 11 09:18:31 compute-0 sudo[100361]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 11 09:18:31 compute-0 sudo[100361]: pam_unix(sudo:session): session closed for user root
Dec 11 09:18:31 compute-0 ceph-mon[74426]: Updating compute-0:/var/lib/ceph/31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060/config/ceph.client.admin.keyring
Dec 11 09:18:31 compute-0 ceph-mon[74426]: Updating compute-1:/var/lib/ceph/31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060/config/ceph.client.admin.keyring
Dec 11 09:18:31 compute-0 ceph-mon[74426]: osdmap e117: 3 total, 3 up, 3 in
Dec 11 09:18:31 compute-0 ceph-mon[74426]: Updating compute-2:/var/lib/ceph/31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060/config/ceph.client.admin.keyring
Dec 11 09:18:31 compute-0 ceph-mon[74426]: pgmap v11: 353 pgs: 1 unknown, 352 active+clean; 457 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 30 KiB/s rd, 0 B/s wr, 12 op/s
Dec 11 09:18:31 compute-0 ceph-mon[74426]: osdmap e118: 3 total, 3 up, 3 in
Dec 11 09:18:31 compute-0 ceph-mon[74426]: from='mgr.14577 192.168.122.100:0/2102670525' entity='mgr.compute-0.wwpcae' 
Dec 11 09:18:31 compute-0 ceph-mon[74426]: from='mgr.14577 192.168.122.100:0/2102670525' entity='mgr.compute-0.wwpcae' 
Dec 11 09:18:31 compute-0 ceph-mon[74426]: from='mgr.14577 192.168.122.100:0/2102670525' entity='mgr.compute-0.wwpcae' 
Dec 11 09:18:31 compute-0 ceph-mon[74426]: from='mgr.14577 192.168.122.100:0/2102670525' entity='mgr.compute-0.wwpcae' 
Dec 11 09:18:31 compute-0 ceph-mon[74426]: from='mgr.14577 192.168.122.100:0/2102670525' entity='mgr.compute-0.wwpcae' 
Dec 11 09:18:31 compute-0 ceph-mon[74426]: from='mgr.14577 192.168.122.100:0/2102670525' entity='mgr.compute-0.wwpcae' 
Dec 11 09:18:31 compute-0 ceph-mon[74426]: from='mgr.14577 192.168.122.100:0/2102670525' entity='mgr.compute-0.wwpcae' 
Dec 11 09:18:31 compute-0 ceph-mon[74426]: from='mgr.14577 192.168.122.100:0/2102670525' entity='mgr.compute-0.wwpcae' 
Dec 11 09:18:31 compute-0 ceph-mon[74426]: from='mgr.14577 192.168.122.100:0/2102670525' entity='mgr.compute-0.wwpcae' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec 11 09:18:31 compute-0 ceph-mon[74426]: from='mgr.14577 192.168.122.100:0/2102670525' entity='mgr.compute-0.wwpcae' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec 11 09:18:31 compute-0 ceph-mon[74426]: from='mgr.14577 192.168.122.100:0/2102670525' entity='mgr.compute-0.wwpcae' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 11 09:18:31 compute-0 sudo[100386]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Dec 11 09:18:31 compute-0 sudo[100386]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 11 09:18:32 compute-0 ceph-mon[74426]: mon.compute-0@0(leader).osd e118 do_prune osdmap full prune enabled
Dec 11 09:18:32 compute-0 ceph-mon[74426]: mon.compute-0@0(leader).osd e119 e119: 3 total, 3 up, 3 in
Dec 11 09:18:32 compute-0 ceph-mon[74426]: log_channel(cluster) log [DBG] : osdmap e119: 3 total, 3 up, 3 in
Dec 11 09:18:32 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 119 pg[10.13( v 56'1088 (0'0,56'1088] local-lis/les=0/0 n=5 ec=61/50 lis/c=117/68 les/c/f=118/69/0 sis=119) [1] r=0 lpr=119 pi=[68,119)/1 luod=0'0 crt=56'1088 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 1 -> 1, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Dec 11 09:18:32 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 119 pg[10.13( v 56'1088 (0'0,56'1088] local-lis/les=0/0 n=5 ec=61/50 lis/c=117/68 les/c/f=118/69/0 sis=119) [1] r=0 lpr=119 pi=[68,119)/1 crt=56'1088 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 11 09:18:32 compute-0 podman[100453]: 2025-12-11 09:18:32.348233352 +0000 UTC m=+0.048453944 container create 253da96cf0b851cae77f311f8cb32aabf31daaae5d02392c600c4ca2dd762434 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=trusting_johnson, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 11 09:18:32 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-haproxy-nfs-cephfs-compute-0-qtoxfz[95274]: [WARNING] 344/091832 (4) : Server backend/nfs.cephfs.0 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 2 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Dec 11 09:18:32 compute-0 systemd[1]: Started libpod-conmon-253da96cf0b851cae77f311f8cb32aabf31daaae5d02392c600c4ca2dd762434.scope.
Dec 11 09:18:32 compute-0 systemd[1]: Started libcrun container.
Dec 11 09:18:32 compute-0 podman[100453]: 2025-12-11 09:18:32.329017619 +0000 UTC m=+0.029238241 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec 11 09:18:32 compute-0 podman[100453]: 2025-12-11 09:18:32.427225307 +0000 UTC m=+0.127445909 container init 253da96cf0b851cae77f311f8cb32aabf31daaae5d02392c600c4ca2dd762434 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=trusting_johnson, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Dec 11 09:18:32 compute-0 podman[100453]: 2025-12-11 09:18:32.435077009 +0000 UTC m=+0.135297601 container start 253da96cf0b851cae77f311f8cb32aabf31daaae5d02392c600c4ca2dd762434 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=trusting_johnson, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 11 09:18:32 compute-0 podman[100453]: 2025-12-11 09:18:32.439229728 +0000 UTC m=+0.139450350 container attach 253da96cf0b851cae77f311f8cb32aabf31daaae5d02392c600c4ca2dd762434 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=trusting_johnson, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Dec 11 09:18:32 compute-0 trusting_johnson[100469]: 167 167
Dec 11 09:18:32 compute-0 systemd[1]: libpod-253da96cf0b851cae77f311f8cb32aabf31daaae5d02392c600c4ca2dd762434.scope: Deactivated successfully.
Dec 11 09:18:32 compute-0 podman[100453]: 2025-12-11 09:18:32.441530979 +0000 UTC m=+0.141751571 container died 253da96cf0b851cae77f311f8cb32aabf31daaae5d02392c600c4ca2dd762434 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=trusting_johnson, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Dec 11 09:18:32 compute-0 systemd[1]: var-lib-containers-storage-overlay-91376ec3a1cffca33dc063d19c399bbe3049abc2864ae88b0bb416b749f67daa-merged.mount: Deactivated successfully.
Dec 11 09:18:32 compute-0 podman[100453]: 2025-12-11 09:18:32.484689569 +0000 UTC m=+0.184910161 container remove 253da96cf0b851cae77f311f8cb32aabf31daaae5d02392c600c4ca2dd762434 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=trusting_johnson, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec 11 09:18:32 compute-0 systemd[1]: libpod-conmon-253da96cf0b851cae77f311f8cb32aabf31daaae5d02392c600c4ca2dd762434.scope: Deactivated successfully.
Dec 11 09:18:32 compute-0 podman[100491]: 2025-12-11 09:18:32.659382925 +0000 UTC m=+0.053937154 container create fb27fd37d329be5ff159c5d6ab4450967b9f06045054f444335e8f56ffc4dd95 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=wonderful_gauss, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 11 09:18:32 compute-0 systemd[1]: Started libpod-conmon-fb27fd37d329be5ff159c5d6ab4450967b9f06045054f444335e8f56ffc4dd95.scope.
Dec 11 09:18:32 compute-0 systemd[1]: Started libcrun container.
Dec 11 09:18:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b2bd67d1b89c1bc2416a6789c6d747a0f1826ad3e4d0757ae1b8731a8416e700/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 11 09:18:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b2bd67d1b89c1bc2416a6789c6d747a0f1826ad3e4d0757ae1b8731a8416e700/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 11 09:18:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b2bd67d1b89c1bc2416a6789c6d747a0f1826ad3e4d0757ae1b8731a8416e700/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 11 09:18:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b2bd67d1b89c1bc2416a6789c6d747a0f1826ad3e4d0757ae1b8731a8416e700/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 11 09:18:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b2bd67d1b89c1bc2416a6789c6d747a0f1826ad3e4d0757ae1b8731a8416e700/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec 11 09:18:32 compute-0 podman[100491]: 2025-12-11 09:18:32.733527421 +0000 UTC m=+0.128081650 container init fb27fd37d329be5ff159c5d6ab4450967b9f06045054f444335e8f56ffc4dd95 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=wonderful_gauss, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Dec 11 09:18:32 compute-0 podman[100491]: 2025-12-11 09:18:32.642022059 +0000 UTC m=+0.036576308 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec 11 09:18:32 compute-0 podman[100491]: 2025-12-11 09:18:32.743660503 +0000 UTC m=+0.138214732 container start fb27fd37d329be5ff159c5d6ab4450967b9f06045054f444335e8f56ffc4dd95 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=wonderful_gauss, ceph=True, OSD_FLAVOR=default, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 11 09:18:32 compute-0 podman[100491]: 2025-12-11 09:18:32.74842332 +0000 UTC m=+0.142977549 container attach fb27fd37d329be5ff159c5d6ab4450967b9f06045054f444335e8f56ffc4dd95 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=wonderful_gauss, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec 11 09:18:32 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-2-0-compute-0-iryjby[94782]: 11/12/2025 09:18:32 : epoch 693a8bfe : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fef60003490 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 11 09:18:32 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-haproxy-nfs-cephfs-compute-0-qtoxfz[95274]: [WARNING] 344/091832 (4) : Server backend/nfs.cephfs.1 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 1 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Dec 11 09:18:32 compute-0 radosgw[93354]: ====== starting new request req=0x7f36cd2845d0 =====
Dec 11 09:18:32 compute-0 radosgw[93354]: ====== req done req=0x7f36cd2845d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 11 09:18:32 compute-0 radosgw[93354]: beast: 0x7f36cd2845d0: 192.168.122.102 - anonymous [11/Dec/2025:09:18:32.961 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 11 09:18:32 compute-0 radosgw[93354]: ====== starting new request req=0x7f36cd2845d0 =====
Dec 11 09:18:32 compute-0 radosgw[93354]: ====== req done req=0x7f36cd2845d0 op status=0 http_status=200 latency=0.001000031s ======
Dec 11 09:18:32 compute-0 radosgw[93354]: beast: 0x7f36cd2845d0: 192.168.122.100 - anonymous [11/Dec/2025:09:18:32.986 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Dec 11 09:18:33 compute-0 ceph-mgr[74715]: log_channel(cluster) log [DBG] : pgmap v14: 353 pgs: 1 unknown, 352 active+clean; 457 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 44 KiB/s rd, 0 B/s wr, 18 op/s
Dec 11 09:18:33 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-2-0-compute-0-iryjby[94782]: 11/12/2025 09:18:33 : epoch 693a8bfe : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fef4c002690 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 11 09:18:33 compute-0 wonderful_gauss[100508]: --> passed data devices: 0 physical, 1 LVM
Dec 11 09:18:33 compute-0 wonderful_gauss[100508]: --> All data devices are unavailable
Dec 11 09:18:33 compute-0 systemd[1]: libpod-fb27fd37d329be5ff159c5d6ab4450967b9f06045054f444335e8f56ffc4dd95.scope: Deactivated successfully.
Dec 11 09:18:33 compute-0 podman[100491]: 2025-12-11 09:18:33.139509538 +0000 UTC m=+0.534063777 container died fb27fd37d329be5ff159c5d6ab4450967b9f06045054f444335e8f56ffc4dd95 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=wonderful_gauss, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.build-date=20250325)
Dec 11 09:18:33 compute-0 systemd[1]: var-lib-containers-storage-overlay-b2bd67d1b89c1bc2416a6789c6d747a0f1826ad3e4d0757ae1b8731a8416e700-merged.mount: Deactivated successfully.
Dec 11 09:18:33 compute-0 podman[100491]: 2025-12-11 09:18:33.182614407 +0000 UTC m=+0.577168636 container remove fb27fd37d329be5ff159c5d6ab4450967b9f06045054f444335e8f56ffc4dd95 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=wonderful_gauss, org.label-schema.schema-version=1.0, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 11 09:18:33 compute-0 systemd[1]: libpod-conmon-fb27fd37d329be5ff159c5d6ab4450967b9f06045054f444335e8f56ffc4dd95.scope: Deactivated successfully.
Dec 11 09:18:33 compute-0 sudo[100386]: pam_unix(sudo:session): session closed for user root
Dec 11 09:18:33 compute-0 ceph-mon[74426]: mon.compute-0@0(leader).osd e119 do_prune osdmap full prune enabled
Dec 11 09:18:33 compute-0 ceph-mon[74426]: mon.compute-0@0(leader).osd e120 e120: 3 total, 3 up, 3 in
Dec 11 09:18:33 compute-0 ceph-mon[74426]: osdmap e119: 3 total, 3 up, 3 in
Dec 11 09:18:33 compute-0 ceph-mon[74426]: log_channel(cluster) log [DBG] : osdmap e120: 3 total, 3 up, 3 in
Dec 11 09:18:33 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 120 pg[10.13( v 56'1088 (0'0,56'1088] local-lis/les=119/120 n=5 ec=61/50 lis/c=117/68 les/c/f=118/69/0 sis=119) [1] r=0 lpr=119 pi=[68,119)/1 crt=56'1088 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 11 09:18:33 compute-0 sudo[100535]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 11 09:18:33 compute-0 sudo[100535]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 11 09:18:33 compute-0 sudo[100535]: pam_unix(sudo:session): session closed for user root
Dec 11 09:18:33 compute-0 sudo[100560]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060 -- lvm list --format json
Dec 11 09:18:33 compute-0 sudo[100560]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 11 09:18:33 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-2-0-compute-0-iryjby[94782]: 11/12/2025 09:18:33 : epoch 693a8bfe : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fef50003c10 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 11 09:18:33 compute-0 podman[100625]: 2025-12-11 09:18:33.801930171 +0000 UTC m=+0.039812099 container create 31922afaef9c86f5e4251c37ab5b8cbd6d4f117cfd881882cde75e0cf168573e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=musing_wozniak, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec 11 09:18:33 compute-0 systemd[1]: Started libpod-conmon-31922afaef9c86f5e4251c37ab5b8cbd6d4f117cfd881882cde75e0cf168573e.scope.
Dec 11 09:18:33 compute-0 systemd[1]: Started libcrun container.
Dec 11 09:18:33 compute-0 podman[100625]: 2025-12-11 09:18:33.874985333 +0000 UTC m=+0.112867291 container init 31922afaef9c86f5e4251c37ab5b8cbd6d4f117cfd881882cde75e0cf168573e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=musing_wozniak, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.schema-version=1.0)
Dec 11 09:18:33 compute-0 podman[100625]: 2025-12-11 09:18:33.785516225 +0000 UTC m=+0.023398183 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec 11 09:18:33 compute-0 podman[100625]: 2025-12-11 09:18:33.881549756 +0000 UTC m=+0.119431694 container start 31922afaef9c86f5e4251c37ab5b8cbd6d4f117cfd881882cde75e0cf168573e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=musing_wozniak, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec 11 09:18:33 compute-0 podman[100625]: 2025-12-11 09:18:33.885101005 +0000 UTC m=+0.122982943 container attach 31922afaef9c86f5e4251c37ab5b8cbd6d4f117cfd881882cde75e0cf168573e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=musing_wozniak, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, CEPH_REF=squid, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 11 09:18:33 compute-0 musing_wozniak[100641]: 167 167
Dec 11 09:18:33 compute-0 systemd[1]: libpod-31922afaef9c86f5e4251c37ab5b8cbd6d4f117cfd881882cde75e0cf168573e.scope: Deactivated successfully.
Dec 11 09:18:33 compute-0 podman[100625]: 2025-12-11 09:18:33.88786799 +0000 UTC m=+0.125749958 container died 31922afaef9c86f5e4251c37ab5b8cbd6d4f117cfd881882cde75e0cf168573e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=musing_wozniak, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 11 09:18:33 compute-0 systemd[1]: var-lib-containers-storage-overlay-c5438723f3fb475f9159cff189fdc6ba8c9e0e5c0eb554f9378767e0bb55acc3-merged.mount: Deactivated successfully.
Dec 11 09:18:33 compute-0 podman[100625]: 2025-12-11 09:18:33.92385906 +0000 UTC m=+0.161740998 container remove 31922afaef9c86f5e4251c37ab5b8cbd6d4f117cfd881882cde75e0cf168573e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=musing_wozniak, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 11 09:18:33 compute-0 systemd[1]: libpod-conmon-31922afaef9c86f5e4251c37ab5b8cbd6d4f117cfd881882cde75e0cf168573e.scope: Deactivated successfully.
Dec 11 09:18:34 compute-0 podman[100664]: 2025-12-11 09:18:34.072843063 +0000 UTC m=+0.043310695 container create 5c0de94d132d09a2093852e36182d6e50a1b05dfdd123f76e0bf76acc377db9b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=quirky_herschel, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Dec 11 09:18:34 compute-0 systemd[1]: Started libpod-conmon-5c0de94d132d09a2093852e36182d6e50a1b05dfdd123f76e0bf76acc377db9b.scope.
Dec 11 09:18:34 compute-0 systemd[1]: Started libcrun container.
Dec 11 09:18:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5174ca156906a077c075ecb3540e6fe53e821ec6d88ba582fd68ef097fcf6507/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 11 09:18:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5174ca156906a077c075ecb3540e6fe53e821ec6d88ba582fd68ef097fcf6507/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 11 09:18:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5174ca156906a077c075ecb3540e6fe53e821ec6d88ba582fd68ef097fcf6507/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 11 09:18:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5174ca156906a077c075ecb3540e6fe53e821ec6d88ba582fd68ef097fcf6507/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 11 09:18:34 compute-0 podman[100664]: 2025-12-11 09:18:34.054514249 +0000 UTC m=+0.024981901 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec 11 09:18:34 compute-0 podman[100664]: 2025-12-11 09:18:34.154591474 +0000 UTC m=+0.125059126 container init 5c0de94d132d09a2093852e36182d6e50a1b05dfdd123f76e0bf76acc377db9b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=quirky_herschel, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 11 09:18:34 compute-0 podman[100664]: 2025-12-11 09:18:34.162277001 +0000 UTC m=+0.132744633 container start 5c0de94d132d09a2093852e36182d6e50a1b05dfdd123f76e0bf76acc377db9b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=quirky_herschel, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Dec 11 09:18:34 compute-0 podman[100664]: 2025-12-11 09:18:34.165754248 +0000 UTC m=+0.136221890 container attach 5c0de94d132d09a2093852e36182d6e50a1b05dfdd123f76e0bf76acc377db9b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=quirky_herschel, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Dec 11 09:18:34 compute-0 ceph-mon[74426]: pgmap v14: 353 pgs: 1 unknown, 352 active+clean; 457 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 44 KiB/s rd, 0 B/s wr, 18 op/s
Dec 11 09:18:34 compute-0 ceph-mon[74426]: osdmap e120: 3 total, 3 up, 3 in
Dec 11 09:18:34 compute-0 quirky_herschel[100681]: {
Dec 11 09:18:34 compute-0 quirky_herschel[100681]:     "1": [
Dec 11 09:18:34 compute-0 quirky_herschel[100681]:         {
Dec 11 09:18:34 compute-0 quirky_herschel[100681]:             "devices": [
Dec 11 09:18:34 compute-0 quirky_herschel[100681]:                 "/dev/loop3"
Dec 11 09:18:34 compute-0 quirky_herschel[100681]:             ],
Dec 11 09:18:34 compute-0 quirky_herschel[100681]:             "lv_name": "ceph_lv0",
Dec 11 09:18:34 compute-0 quirky_herschel[100681]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec 11 09:18:34 compute-0 quirky_herschel[100681]:             "lv_size": "21470642176",
Dec 11 09:18:34 compute-0 quirky_herschel[100681]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=TIjPv6-JN4c-U1Hf-F3k1-2mgj-CwGi-pbjIHh,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=865a308b-cdc3-4034-b5eb-feb596b462bf,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 11 09:18:34 compute-0 quirky_herschel[100681]:             "lv_uuid": "TIjPv6-JN4c-U1Hf-F3k1-2mgj-CwGi-pbjIHh",
Dec 11 09:18:34 compute-0 quirky_herschel[100681]:             "name": "ceph_lv0",
Dec 11 09:18:34 compute-0 quirky_herschel[100681]:             "path": "/dev/ceph_vg0/ceph_lv0",
Dec 11 09:18:34 compute-0 quirky_herschel[100681]:             "tags": {
Dec 11 09:18:34 compute-0 quirky_herschel[100681]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec 11 09:18:34 compute-0 quirky_herschel[100681]:                 "ceph.block_uuid": "TIjPv6-JN4c-U1Hf-F3k1-2mgj-CwGi-pbjIHh",
Dec 11 09:18:34 compute-0 quirky_herschel[100681]:                 "ceph.cephx_lockbox_secret": "",
Dec 11 09:18:34 compute-0 quirky_herschel[100681]:                 "ceph.cluster_fsid": "31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060",
Dec 11 09:18:34 compute-0 quirky_herschel[100681]:                 "ceph.cluster_name": "ceph",
Dec 11 09:18:34 compute-0 quirky_herschel[100681]:                 "ceph.crush_device_class": "",
Dec 11 09:18:34 compute-0 quirky_herschel[100681]:                 "ceph.encrypted": "0",
Dec 11 09:18:34 compute-0 quirky_herschel[100681]:                 "ceph.osd_fsid": "865a308b-cdc3-4034-b5eb-feb596b462bf",
Dec 11 09:18:34 compute-0 quirky_herschel[100681]:                 "ceph.osd_id": "1",
Dec 11 09:18:34 compute-0 quirky_herschel[100681]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 11 09:18:34 compute-0 quirky_herschel[100681]:                 "ceph.type": "block",
Dec 11 09:18:34 compute-0 quirky_herschel[100681]:                 "ceph.vdo": "0",
Dec 11 09:18:34 compute-0 quirky_herschel[100681]:                 "ceph.with_tpm": "0"
Dec 11 09:18:34 compute-0 quirky_herschel[100681]:             },
Dec 11 09:18:34 compute-0 quirky_herschel[100681]:             "type": "block",
Dec 11 09:18:34 compute-0 quirky_herschel[100681]:             "vg_name": "ceph_vg0"
Dec 11 09:18:34 compute-0 quirky_herschel[100681]:         }
Dec 11 09:18:34 compute-0 quirky_herschel[100681]:     ]
Dec 11 09:18:34 compute-0 quirky_herschel[100681]: }
Dec 11 09:18:34 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-mgr-compute-0-wwpcae[74711]: ::ffff:192.168.122.100 - - [11/Dec/2025:09:18:34] "GET /metrics HTTP/1.1" 200 46583 "" "Prometheus/2.51.0"
Dec 11 09:18:34 compute-0 ceph-mgr[74715]: [prometheus INFO cherrypy.access.139680037156848] ::ffff:192.168.122.100 - - [11/Dec/2025:09:18:34] "GET /metrics HTTP/1.1" 200 46583 "" "Prometheus/2.51.0"
Dec 11 09:18:34 compute-0 systemd[1]: libpod-5c0de94d132d09a2093852e36182d6e50a1b05dfdd123f76e0bf76acc377db9b.scope: Deactivated successfully.
Dec 11 09:18:34 compute-0 podman[100664]: 2025-12-11 09:18:34.474534388 +0000 UTC m=+0.445002020 container died 5c0de94d132d09a2093852e36182d6e50a1b05dfdd123f76e0bf76acc377db9b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=quirky_herschel, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=squid, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 11 09:18:34 compute-0 systemd[1]: var-lib-containers-storage-overlay-5174ca156906a077c075ecb3540e6fe53e821ec6d88ba582fd68ef097fcf6507-merged.mount: Deactivated successfully.
Dec 11 09:18:34 compute-0 podman[100664]: 2025-12-11 09:18:34.519221526 +0000 UTC m=+0.489689158 container remove 5c0de94d132d09a2093852e36182d6e50a1b05dfdd123f76e0bf76acc377db9b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=quirky_herschel, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325)
Dec 11 09:18:34 compute-0 systemd[1]: libpod-conmon-5c0de94d132d09a2093852e36182d6e50a1b05dfdd123f76e0bf76acc377db9b.scope: Deactivated successfully.
Dec 11 09:18:34 compute-0 sudo[100560]: pam_unix(sudo:session): session closed for user root
Dec 11 09:18:34 compute-0 sudo[100703]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 11 09:18:34 compute-0 sudo[100703]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 11 09:18:34 compute-0 sudo[100703]: pam_unix(sudo:session): session closed for user root
Dec 11 09:18:34 compute-0 sudo[100728]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060 -- raw list --format json
Dec 11 09:18:34 compute-0 sudo[100728]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 11 09:18:34 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-2-0-compute-0-iryjby[94782]: 11/12/2025 09:18:34 : epoch 693a8bfe : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fef70004440 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 11 09:18:34 compute-0 radosgw[93354]: ====== starting new request req=0x7f36cd2845d0 =====
Dec 11 09:18:34 compute-0 radosgw[93354]: ====== req done req=0x7f36cd2845d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 11 09:18:34 compute-0 radosgw[93354]: beast: 0x7f36cd2845d0: 192.168.122.102 - anonymous [11/Dec/2025:09:18:34.965 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 11 09:18:34 compute-0 radosgw[93354]: ====== starting new request req=0x7f36cd2845d0 =====
Dec 11 09:18:34 compute-0 radosgw[93354]: ====== req done req=0x7f36cd2845d0 op status=0 http_status=200 latency=0.001000032s ======
Dec 11 09:18:34 compute-0 radosgw[93354]: beast: 0x7f36cd2845d0: 192.168.122.100 - anonymous [11/Dec/2025:09:18:34.988 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Dec 11 09:18:35 compute-0 ceph-mgr[74715]: log_channel(cluster) log [DBG] : pgmap v16: 353 pgs: 353 active+clean; 457 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 244 B/s rd, 0 op/s; 52 B/s, 2 objects/s recovering
Dec 11 09:18:35 compute-0 ceph-mon[74426]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "21"} v 0)
Dec 11 09:18:35 compute-0 ceph-mon[74426]: log_channel(audit) log [INF] : from='mgr.14577 192.168.122.100:0/2102670525' entity='mgr.compute-0.wwpcae' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "21"}]: dispatch
Dec 11 09:18:35 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-2-0-compute-0-iryjby[94782]: 11/12/2025 09:18:35 : epoch 693a8bfe : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fef60003490 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 11 09:18:35 compute-0 podman[100794]: 2025-12-11 09:18:35.125289961 +0000 UTC m=+0.041115928 container create 6f9bc05ad086be0bb69cdac0f9f66f2a483098bb28a39f933d8a31f342a76bc5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=festive_proskuriakova, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Dec 11 09:18:35 compute-0 systemd[1]: Started libpod-conmon-6f9bc05ad086be0bb69cdac0f9f66f2a483098bb28a39f933d8a31f342a76bc5.scope.
Dec 11 09:18:35 compute-0 systemd[1]: Started libcrun container.
Dec 11 09:18:35 compute-0 podman[100794]: 2025-12-11 09:18:35.197024934 +0000 UTC m=+0.112850941 container init 6f9bc05ad086be0bb69cdac0f9f66f2a483098bb28a39f933d8a31f342a76bc5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=festive_proskuriakova, org.label-schema.build-date=20250325, CEPH_REF=squid, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec 11 09:18:35 compute-0 podman[100794]: 2025-12-11 09:18:35.107878734 +0000 UTC m=+0.023704731 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec 11 09:18:35 compute-0 podman[100794]: 2025-12-11 09:18:35.205969749 +0000 UTC m=+0.121795736 container start 6f9bc05ad086be0bb69cdac0f9f66f2a483098bb28a39f933d8a31f342a76bc5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=festive_proskuriakova, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 11 09:18:35 compute-0 podman[100794]: 2025-12-11 09:18:35.209451316 +0000 UTC m=+0.125277313 container attach 6f9bc05ad086be0bb69cdac0f9f66f2a483098bb28a39f933d8a31f342a76bc5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=festive_proskuriakova, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 11 09:18:35 compute-0 festive_proskuriakova[100810]: 167 167
Dec 11 09:18:35 compute-0 systemd[1]: libpod-6f9bc05ad086be0bb69cdac0f9f66f2a483098bb28a39f933d8a31f342a76bc5.scope: Deactivated successfully.
Dec 11 09:18:35 compute-0 podman[100794]: 2025-12-11 09:18:35.211722146 +0000 UTC m=+0.127548123 container died 6f9bc05ad086be0bb69cdac0f9f66f2a483098bb28a39f933d8a31f342a76bc5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=festive_proskuriakova, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec 11 09:18:35 compute-0 systemd[1]: var-lib-containers-storage-overlay-1a31f37035352da538d799ff9f4b40a1fc0bc3e5788b931e072854287920acab-merged.mount: Deactivated successfully.
Dec 11 09:18:35 compute-0 podman[100794]: 2025-12-11 09:18:35.248394957 +0000 UTC m=+0.164220934 container remove 6f9bc05ad086be0bb69cdac0f9f66f2a483098bb28a39f933d8a31f342a76bc5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=festive_proskuriakova, org.label-schema.build-date=20250325, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec 11 09:18:35 compute-0 systemd[1]: libpod-conmon-6f9bc05ad086be0bb69cdac0f9f66f2a483098bb28a39f933d8a31f342a76bc5.scope: Deactivated successfully.
Dec 11 09:18:35 compute-0 ceph-mon[74426]: mon.compute-0@0(leader).osd e120 do_prune osdmap full prune enabled
Dec 11 09:18:35 compute-0 ceph-mon[74426]: from='mgr.14577 192.168.122.100:0/2102670525' entity='mgr.compute-0.wwpcae' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "21"}]: dispatch
Dec 11 09:18:35 compute-0 ceph-mon[74426]: log_channel(audit) log [INF] : from='mgr.14577 192.168.122.100:0/2102670525' entity='mgr.compute-0.wwpcae' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "21"}]': finished
Dec 11 09:18:35 compute-0 ceph-mon[74426]: mon.compute-0@0(leader).osd e121 e121: 3 total, 3 up, 3 in
Dec 11 09:18:35 compute-0 ceph-mon[74426]: log_channel(cluster) log [DBG] : osdmap e121: 3 total, 3 up, 3 in
Dec 11 09:18:35 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 121 pg[10.14( empty local-lis/les=0/0 n=0 ec=61/50 lis/c=76/76 les/c/f=77/77/0 sis=121) [1] r=0 lpr=121 pi=[76,121)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 11 09:18:35 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-2-0-compute-0-iryjby[94782]: 11/12/2025 09:18:35 : epoch 693a8bfe : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fef4c0036c0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 11 09:18:35 compute-0 podman[100834]: 2025-12-11 09:18:35.407734249 +0000 UTC m=+0.039081086 container create cd6faaa3245298a46ab02b9a5ac50e7fdd335d77fdb8bd826d299ca6e2211f01 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=condescending_keldysh, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.schema-version=1.0, ceph=True)
Dec 11 09:18:35 compute-0 systemd[1]: Started libpod-conmon-cd6faaa3245298a46ab02b9a5ac50e7fdd335d77fdb8bd826d299ca6e2211f01.scope.
Dec 11 09:18:35 compute-0 systemd[1]: Started libcrun container.
Dec 11 09:18:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e1c7e3f86bccadd6d783017ce0533db317437b41f5b95bcac8824e27e5fee4b4/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 11 09:18:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e1c7e3f86bccadd6d783017ce0533db317437b41f5b95bcac8824e27e5fee4b4/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 11 09:18:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e1c7e3f86bccadd6d783017ce0533db317437b41f5b95bcac8824e27e5fee4b4/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 11 09:18:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e1c7e3f86bccadd6d783017ce0533db317437b41f5b95bcac8824e27e5fee4b4/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 11 09:18:35 compute-0 podman[100834]: 2025-12-11 09:18:35.392027165 +0000 UTC m=+0.023374022 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec 11 09:18:35 compute-0 podman[100834]: 2025-12-11 09:18:35.496785796 +0000 UTC m=+0.128132653 container init cd6faaa3245298a46ab02b9a5ac50e7fdd335d77fdb8bd826d299ca6e2211f01 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=condescending_keldysh, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.license=GPLv2, io.buildah.version=1.40.1)
Dec 11 09:18:35 compute-0 podman[100834]: 2025-12-11 09:18:35.506842455 +0000 UTC m=+0.138189302 container start cd6faaa3245298a46ab02b9a5ac50e7fdd335d77fdb8bd826d299ca6e2211f01 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=condescending_keldysh, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 11 09:18:35 compute-0 podman[100834]: 2025-12-11 09:18:35.511331083 +0000 UTC m=+0.142677940 container attach cd6faaa3245298a46ab02b9a5ac50e7fdd335d77fdb8bd826d299ca6e2211f01 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=condescending_keldysh, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 11 09:18:36 compute-0 ceph-mon[74426]: mon.compute-0@0(leader).osd e121 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 11 09:18:36 compute-0 ceph-mon[74426]: mon.compute-0@0(leader).osd e121 do_prune osdmap full prune enabled
Dec 11 09:18:36 compute-0 ceph-mon[74426]: mon.compute-0@0(leader).osd e122 e122: 3 total, 3 up, 3 in
Dec 11 09:18:36 compute-0 ceph-mon[74426]: log_channel(cluster) log [DBG] : osdmap e122: 3 total, 3 up, 3 in
Dec 11 09:18:36 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 122 pg[10.14( empty local-lis/les=0/0 n=0 ec=61/50 lis/c=76/76 les/c/f=77/77/0 sis=122) [1]/[2] r=-1 lpr=122 pi=[76,122)/1 crt=0'0 mlcod 0'0 remapped mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 11 09:18:36 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 122 pg[10.14( empty local-lis/les=0/0 n=0 ec=61/50 lis/c=76/76 les/c/f=77/77/0 sis=122) [1]/[2] r=-1 lpr=122 pi=[76,122)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec 11 09:18:36 compute-0 ceph-mon[74426]: pgmap v16: 353 pgs: 353 active+clean; 457 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 244 B/s rd, 0 op/s; 52 B/s, 2 objects/s recovering
Dec 11 09:18:36 compute-0 ceph-mon[74426]: from='mgr.14577 192.168.122.100:0/2102670525' entity='mgr.compute-0.wwpcae' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "21"}]': finished
Dec 11 09:18:36 compute-0 ceph-mon[74426]: osdmap e121: 3 total, 3 up, 3 in
Dec 11 09:18:36 compute-0 ceph-mon[74426]: osdmap e122: 3 total, 3 up, 3 in
Dec 11 09:18:36 compute-0 lvm[100925]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Dec 11 09:18:36 compute-0 lvm[100925]: VG ceph_vg0 finished
Dec 11 09:18:36 compute-0 condescending_keldysh[100851]: {}
Dec 11 09:18:36 compute-0 systemd[1]: libpod-cd6faaa3245298a46ab02b9a5ac50e7fdd335d77fdb8bd826d299ca6e2211f01.scope: Deactivated successfully.
Dec 11 09:18:36 compute-0 systemd[1]: libpod-cd6faaa3245298a46ab02b9a5ac50e7fdd335d77fdb8bd826d299ca6e2211f01.scope: Consumed 1.610s CPU time.
Dec 11 09:18:36 compute-0 podman[100834]: 2025-12-11 09:18:36.403123949 +0000 UTC m=+1.034470796 container died cd6faaa3245298a46ab02b9a5ac50e7fdd335d77fdb8bd826d299ca6e2211f01 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=condescending_keldysh, io.buildah.version=1.40.1, ceph=True, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Dec 11 09:18:36 compute-0 systemd[1]: var-lib-containers-storage-overlay-e1c7e3f86bccadd6d783017ce0533db317437b41f5b95bcac8824e27e5fee4b4-merged.mount: Deactivated successfully.
Dec 11 09:18:36 compute-0 podman[100834]: 2025-12-11 09:18:36.626537056 +0000 UTC m=+1.257883893 container remove cd6faaa3245298a46ab02b9a5ac50e7fdd335d77fdb8bd826d299ca6e2211f01 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=condescending_keldysh, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid)
Dec 11 09:18:36 compute-0 systemd[1]: libpod-conmon-cd6faaa3245298a46ab02b9a5ac50e7fdd335d77fdb8bd826d299ca6e2211f01.scope: Deactivated successfully.
Dec 11 09:18:36 compute-0 sudo[100728]: pam_unix(sudo:session): session closed for user root
Dec 11 09:18:36 compute-0 ceph-mon[74426]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec 11 09:18:36 compute-0 ceph-mon[74426]: log_channel(audit) log [INF] : from='mgr.14577 192.168.122.100:0/2102670525' entity='mgr.compute-0.wwpcae' 
Dec 11 09:18:36 compute-0 ceph-mon[74426]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec 11 09:18:36 compute-0 ceph-mon[74426]: log_channel(audit) log [INF] : from='mgr.14577 192.168.122.100:0/2102670525' entity='mgr.compute-0.wwpcae' 
Dec 11 09:18:36 compute-0 ceph-mon[74426]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.prometheus}] v 0)
Dec 11 09:18:36 compute-0 ceph-mon[74426]: log_channel(audit) log [INF] : from='mgr.14577 192.168.122.100:0/2102670525' entity='mgr.compute-0.wwpcae' 
Dec 11 09:18:36 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-2-0-compute-0-iryjby[94782]: 11/12/2025 09:18:36 : epoch 693a8bfe : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fef50003c10 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 11 09:18:36 compute-0 sudo[100965]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ehoepkevyjjsmxbyrmdgvaoqmnojnvof ; /usr/bin/python3'
Dec 11 09:18:36 compute-0 sudo[100965]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 11 09:18:36 compute-0 sudo[100966]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 11 09:18:36 compute-0 sudo[100966]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 11 09:18:36 compute-0 sudo[100966]: pam_unix(sudo:session): session closed for user root
Dec 11 09:18:36 compute-0 sudo[100967]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 11 09:18:36 compute-0 sudo[100967]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 11 09:18:36 compute-0 sudo[100967]: pam_unix(sudo:session): session closed for user root
Dec 11 09:18:36 compute-0 python3[100997]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint radosgw-admin quay.io/ceph/ceph:v19 --fsid 31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch ps --daemon-type rgw --format json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 11 09:18:36 compute-0 radosgw[93354]: ====== starting new request req=0x7f36cd2845d0 =====
Dec 11 09:18:36 compute-0 radosgw[93354]: ====== req done req=0x7f36cd2845d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 11 09:18:36 compute-0 radosgw[93354]: beast: 0x7f36cd2845d0: 192.168.122.102 - anonymous [11/Dec/2025:09:18:36.969 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 11 09:18:36 compute-0 radosgw[93354]: ====== starting new request req=0x7f36cd2845d0 =====
Dec 11 09:18:36 compute-0 radosgw[93354]: ====== req done req=0x7f36cd2845d0 op status=0 http_status=200 latency=0.001000031s ======
Dec 11 09:18:36 compute-0 radosgw[93354]: beast: 0x7f36cd2845d0: 192.168.122.100 - anonymous [11/Dec/2025:09:18:36.991 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Dec 11 09:18:37 compute-0 podman[101018]: 2025-12-11 09:18:37.001762555 +0000 UTC m=+0.048047372 container create 7667c7169cd72313aa120762f610a470d55bb174f89a2d7ee03f41c9ea4cc7b1 (image=quay.io/ceph/ceph:v19, name=trusting_curran, org.label-schema.build-date=20250325, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS)
Dec 11 09:18:37 compute-0 ceph-mgr[74715]: [cephadm INFO cephadm.serve] Reconfiguring mon.compute-0 (monmap changed)...
Dec 11 09:18:37 compute-0 ceph-mgr[74715]: log_channel(cephadm) log [INF] : Reconfiguring mon.compute-0 (monmap changed)...
Dec 11 09:18:37 compute-0 ceph-mon[74426]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "mon."} v 0)
Dec 11 09:18:37 compute-0 ceph-mon[74426]: log_channel(audit) log [INF] : from='mgr.14577 192.168.122.100:0/2102670525' entity='mgr.compute-0.wwpcae' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch
Dec 11 09:18:37 compute-0 ceph-mon[74426]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config get", "who": "mon", "key": "public_network"} v 0)
Dec 11 09:18:37 compute-0 ceph-mon[74426]: log_channel(audit) log [DBG] : from='mgr.14577 192.168.122.100:0/2102670525' entity='mgr.compute-0.wwpcae' cmd=[{"prefix": "config get", "who": "mon", "key": "public_network"}]: dispatch
Dec 11 09:18:37 compute-0 ceph-mon[74426]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 11 09:18:37 compute-0 ceph-mon[74426]: log_channel(audit) log [DBG] : from='mgr.14577 192.168.122.100:0/2102670525' entity='mgr.compute-0.wwpcae' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 11 09:18:37 compute-0 ceph-mgr[74715]: [cephadm INFO cephadm.serve] Reconfiguring daemon mon.compute-0 on compute-0
Dec 11 09:18:37 compute-0 ceph-mgr[74715]: log_channel(cephadm) log [INF] : Reconfiguring daemon mon.compute-0 on compute-0
Dec 11 09:18:37 compute-0 systemd[1]: Started libpod-conmon-7667c7169cd72313aa120762f610a470d55bb174f89a2d7ee03f41c9ea4cc7b1.scope.
Dec 11 09:18:37 compute-0 systemd[1]: Started libcrun container.
Dec 11 09:18:37 compute-0 sudo[101032]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 11 09:18:37 compute-0 podman[101018]: 2025-12-11 09:18:36.983219064 +0000 UTC m=+0.029503901 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Dec 11 09:18:37 compute-0 ceph-mgr[74715]: log_channel(cluster) log [DBG] : pgmap v19: 353 pgs: 353 active+clean; 457 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 212 B/s rd, 0 op/s; 45 B/s, 1 objects/s recovering
Dec 11 09:18:37 compute-0 sudo[101032]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 11 09:18:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1765fd309a0505bd603384bec61e6dbbf42e074f1e5fdbabf9723889dcb7ba55/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec 11 09:18:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1765fd309a0505bd603384bec61e6dbbf42e074f1e5fdbabf9723889dcb7ba55/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 11 09:18:37 compute-0 ceph-mon[74426]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "22"} v 0)
Dec 11 09:18:37 compute-0 ceph-mon[74426]: log_channel(audit) log [INF] : from='mgr.14577 192.168.122.100:0/2102670525' entity='mgr.compute-0.wwpcae' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "22"}]: dispatch
Dec 11 09:18:37 compute-0 sudo[101032]: pam_unix(sudo:session): session closed for user root
Dec 11 09:18:37 compute-0 podman[101018]: 2025-12-11 09:18:37.091909025 +0000 UTC m=+0.138193862 container init 7667c7169cd72313aa120762f610a470d55bb174f89a2d7ee03f41c9ea4cc7b1 (image=quay.io/ceph/ceph:v19, name=trusting_curran, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, OSD_FLAVOR=default)
Dec 11 09:18:37 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-2-0-compute-0-iryjby[94782]: 11/12/2025 09:18:37 : epoch 693a8bfe : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fef70004440 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 11 09:18:37 compute-0 podman[101018]: 2025-12-11 09:18:37.102432139 +0000 UTC m=+0.148716956 container start 7667c7169cd72313aa120762f610a470d55bb174f89a2d7ee03f41c9ea4cc7b1 (image=quay.io/ceph/ceph:v19, name=trusting_curran, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1)
Dec 11 09:18:37 compute-0 podman[101018]: 2025-12-11 09:18:37.10636456 +0000 UTC m=+0.152649367 container attach 7667c7169cd72313aa120762f610a470d55bb174f89a2d7ee03f41c9ea4cc7b1 (image=quay.io/ceph/ceph:v19, name=trusting_curran, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Dec 11 09:18:37 compute-0 sudo[101062]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph:v19 --timeout 895 _orch deploy --fsid 31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060
Dec 11 09:18:37 compute-0 sudo[101062]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 11 09:18:37 compute-0 trusting_curran[101055]: ERROR: invalid flag --daemon-type
Dec 11 09:18:37 compute-0 systemd[1]: libpod-7667c7169cd72313aa120762f610a470d55bb174f89a2d7ee03f41c9ea4cc7b1.scope: Deactivated successfully.
Dec 11 09:18:37 compute-0 podman[101018]: 2025-12-11 09:18:37.159140917 +0000 UTC m=+0.205425734 container died 7667c7169cd72313aa120762f610a470d55bb174f89a2d7ee03f41c9ea4cc7b1 (image=quay.io/ceph/ceph:v19, name=trusting_curran, ceph=True, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 11 09:18:37 compute-0 systemd[1]: var-lib-containers-storage-overlay-1765fd309a0505bd603384bec61e6dbbf42e074f1e5fdbabf9723889dcb7ba55-merged.mount: Deactivated successfully.
Dec 11 09:18:37 compute-0 podman[101018]: 2025-12-11 09:18:37.197069496 +0000 UTC m=+0.243354313 container remove 7667c7169cd72313aa120762f610a470d55bb174f89a2d7ee03f41c9ea4cc7b1 (image=quay.io/ceph/ceph:v19, name=trusting_curran, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.40.1)
Dec 11 09:18:37 compute-0 systemd[1]: libpod-conmon-7667c7169cd72313aa120762f610a470d55bb174f89a2d7ee03f41c9ea4cc7b1.scope: Deactivated successfully.
Dec 11 09:18:37 compute-0 sudo[100965]: pam_unix(sudo:session): session closed for user root
Dec 11 09:18:37 compute-0 ceph-mon[74426]: mon.compute-0@0(leader).osd e122 do_prune osdmap full prune enabled
Dec 11 09:18:37 compute-0 ceph-mon[74426]: log_channel(audit) log [INF] : from='mgr.14577 192.168.122.100:0/2102670525' entity='mgr.compute-0.wwpcae' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "22"}]': finished
Dec 11 09:18:37 compute-0 ceph-mon[74426]: mon.compute-0@0(leader).osd e123 e123: 3 total, 3 up, 3 in
Dec 11 09:18:37 compute-0 ceph-mon[74426]: log_channel(cluster) log [DBG] : osdmap e123: 3 total, 3 up, 3 in
Dec 11 09:18:37 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-2-0-compute-0-iryjby[94782]: 11/12/2025 09:18:37 : epoch 693a8bfe : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fef60003490 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 11 09:18:37 compute-0 podman[101132]: 2025-12-11 09:18:37.496987863 +0000 UTC m=+0.046687440 container create b51487c49373e2489d820f12e95456ea6ab79dc84b98c66d32a63de9c9799b42 (image=quay.io/ceph/ceph:v19, name=adoring_mccarthy, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 11 09:18:37 compute-0 systemd[1]: Started libpod-conmon-b51487c49373e2489d820f12e95456ea6ab79dc84b98c66d32a63de9c9799b42.scope.
Dec 11 09:18:37 compute-0 systemd[1]: Started libcrun container.
Dec 11 09:18:37 compute-0 podman[101132]: 2025-12-11 09:18:37.476222854 +0000 UTC m=+0.025922461 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Dec 11 09:18:37 compute-0 podman[101132]: 2025-12-11 09:18:37.58087816 +0000 UTC m=+0.130577737 container init b51487c49373e2489d820f12e95456ea6ab79dc84b98c66d32a63de9c9799b42 (image=quay.io/ceph/ceph:v19, name=adoring_mccarthy, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Dec 11 09:18:37 compute-0 podman[101132]: 2025-12-11 09:18:37.588908767 +0000 UTC m=+0.138608344 container start b51487c49373e2489d820f12e95456ea6ab79dc84b98c66d32a63de9c9799b42 (image=quay.io/ceph/ceph:v19, name=adoring_mccarthy, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.license=GPLv2)
Dec 11 09:18:37 compute-0 podman[101132]: 2025-12-11 09:18:37.592642593 +0000 UTC m=+0.142342190 container attach b51487c49373e2489d820f12e95456ea6ab79dc84b98c66d32a63de9c9799b42 (image=quay.io/ceph/ceph:v19, name=adoring_mccarthy, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0)
Dec 11 09:18:37 compute-0 adoring_mccarthy[101148]: 167 167
Dec 11 09:18:37 compute-0 systemd[1]: libpod-b51487c49373e2489d820f12e95456ea6ab79dc84b98c66d32a63de9c9799b42.scope: Deactivated successfully.
Dec 11 09:18:37 compute-0 conmon[101148]: conmon b51487c49373e2489d82 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-b51487c49373e2489d820f12e95456ea6ab79dc84b98c66d32a63de9c9799b42.scope/container/memory.events
Dec 11 09:18:37 compute-0 podman[101132]: 2025-12-11 09:18:37.594557961 +0000 UTC m=+0.144257538 container died b51487c49373e2489d820f12e95456ea6ab79dc84b98c66d32a63de9c9799b42 (image=quay.io/ceph/ceph:v19, name=adoring_mccarthy, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 11 09:18:37 compute-0 systemd[1]: var-lib-containers-storage-overlay-639df66ad69f340e36858c8a2c438d252636c0fb5fa4f479b8e9acf107a4d664-merged.mount: Deactivated successfully.
Dec 11 09:18:37 compute-0 podman[101132]: 2025-12-11 09:18:37.633649827 +0000 UTC m=+0.183349414 container remove b51487c49373e2489d820f12e95456ea6ab79dc84b98c66d32a63de9c9799b42 (image=quay.io/ceph/ceph:v19, name=adoring_mccarthy, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 11 09:18:37 compute-0 systemd[1]: libpod-conmon-b51487c49373e2489d820f12e95456ea6ab79dc84b98c66d32a63de9c9799b42.scope: Deactivated successfully.
Dec 11 09:18:37 compute-0 sudo[101062]: pam_unix(sudo:session): session closed for user root
Dec 11 09:18:37 compute-0 ceph-mon[74426]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec 11 09:18:37 compute-0 ceph-mon[74426]: log_channel(audit) log [INF] : from='mgr.14577 192.168.122.100:0/2102670525' entity='mgr.compute-0.wwpcae' 
Dec 11 09:18:37 compute-0 ceph-mon[74426]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec 11 09:18:37 compute-0 ceph-mon[74426]: from='mgr.14577 192.168.122.100:0/2102670525' entity='mgr.compute-0.wwpcae' 
Dec 11 09:18:37 compute-0 ceph-mon[74426]: from='mgr.14577 192.168.122.100:0/2102670525' entity='mgr.compute-0.wwpcae' 
Dec 11 09:18:37 compute-0 ceph-mon[74426]: from='mgr.14577 192.168.122.100:0/2102670525' entity='mgr.compute-0.wwpcae' 
Dec 11 09:18:37 compute-0 ceph-mon[74426]: Reconfiguring mon.compute-0 (monmap changed)...
Dec 11 09:18:37 compute-0 ceph-mon[74426]: from='mgr.14577 192.168.122.100:0/2102670525' entity='mgr.compute-0.wwpcae' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch
Dec 11 09:18:37 compute-0 ceph-mon[74426]: from='mgr.14577 192.168.122.100:0/2102670525' entity='mgr.compute-0.wwpcae' cmd=[{"prefix": "config get", "who": "mon", "key": "public_network"}]: dispatch
Dec 11 09:18:37 compute-0 ceph-mon[74426]: from='mgr.14577 192.168.122.100:0/2102670525' entity='mgr.compute-0.wwpcae' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 11 09:18:37 compute-0 ceph-mon[74426]: Reconfiguring daemon mon.compute-0 on compute-0
Dec 11 09:18:37 compute-0 ceph-mon[74426]: pgmap v19: 353 pgs: 353 active+clean; 457 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 212 B/s rd, 0 op/s; 45 B/s, 1 objects/s recovering
Dec 11 09:18:37 compute-0 ceph-mon[74426]: from='mgr.14577 192.168.122.100:0/2102670525' entity='mgr.compute-0.wwpcae' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "22"}]: dispatch
Dec 11 09:18:37 compute-0 ceph-mon[74426]: from='mgr.14577 192.168.122.100:0/2102670525' entity='mgr.compute-0.wwpcae' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "22"}]': finished
Dec 11 09:18:37 compute-0 ceph-mon[74426]: osdmap e123: 3 total, 3 up, 3 in
Dec 11 09:18:37 compute-0 ceph-mon[74426]: log_channel(audit) log [INF] : from='mgr.14577 192.168.122.100:0/2102670525' entity='mgr.compute-0.wwpcae' 
Dec 11 09:18:37 compute-0 ceph-mgr[74715]: [cephadm INFO cephadm.serve] Reconfiguring mgr.compute-0.wwpcae (monmap changed)...
Dec 11 09:18:37 compute-0 ceph-mgr[74715]: log_channel(cephadm) log [INF] : Reconfiguring mgr.compute-0.wwpcae (monmap changed)...
Dec 11 09:18:37 compute-0 ceph-mon[74426]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get-or-create", "entity": "mgr.compute-0.wwpcae", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} v 0)
Dec 11 09:18:37 compute-0 ceph-mon[74426]: log_channel(audit) log [INF] : from='mgr.14577 192.168.122.100:0/2102670525' entity='mgr.compute-0.wwpcae' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.compute-0.wwpcae", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch
Dec 11 09:18:37 compute-0 ceph-mon[74426]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr services"} v 0)
Dec 11 09:18:37 compute-0 ceph-mon[74426]: log_channel(audit) log [DBG] : from='mgr.14577 192.168.122.100:0/2102670525' entity='mgr.compute-0.wwpcae' cmd=[{"prefix": "mgr services"}]: dispatch
Dec 11 09:18:37 compute-0 ceph-mon[74426]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 11 09:18:37 compute-0 ceph-mon[74426]: log_channel(audit) log [DBG] : from='mgr.14577 192.168.122.100:0/2102670525' entity='mgr.compute-0.wwpcae' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 11 09:18:37 compute-0 ceph-mgr[74715]: [cephadm INFO cephadm.serve] Reconfiguring daemon mgr.compute-0.wwpcae on compute-0
Dec 11 09:18:37 compute-0 ceph-mgr[74715]: log_channel(cephadm) log [INF] : Reconfiguring daemon mgr.compute-0.wwpcae on compute-0
Dec 11 09:18:37 compute-0 sudo[101166]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 11 09:18:37 compute-0 sudo[101166]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 11 09:18:37 compute-0 sudo[101166]: pam_unix(sudo:session): session closed for user root
Dec 11 09:18:37 compute-0 sudo[101191]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph:v19 --timeout 895 _orch deploy --fsid 31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060
Dec 11 09:18:37 compute-0 sudo[101191]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 11 09:18:38 compute-0 podman[101233]: 2025-12-11 09:18:38.212608156 +0000 UTC m=+0.047138034 container create ec25755e97624e99acacd30023063fa8f6838297796164ea46c55ee4f4fe5bfb (image=quay.io/ceph/ceph:v19, name=wizardly_mestorf, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 11 09:18:38 compute-0 systemd[1]: Started libpod-conmon-ec25755e97624e99acacd30023063fa8f6838297796164ea46c55ee4f4fe5bfb.scope.
Dec 11 09:18:38 compute-0 ceph-mon[74426]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Dec 11 09:18:38 compute-0 ceph-mon[74426]: log_channel(audit) log [DBG] : from='mgr.14577 192.168.122.100:0/2102670525' entity='mgr.compute-0.wwpcae' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec 11 09:18:38 compute-0 systemd[1]: Started libcrun container.
Dec 11 09:18:38 compute-0 podman[101233]: 2025-12-11 09:18:38.284508604 +0000 UTC m=+0.119038492 container init ec25755e97624e99acacd30023063fa8f6838297796164ea46c55ee4f4fe5bfb (image=quay.io/ceph/ceph:v19, name=wizardly_mestorf, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 11 09:18:38 compute-0 podman[101233]: 2025-12-11 09:18:38.19389457 +0000 UTC m=+0.028424448 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Dec 11 09:18:38 compute-0 podman[101233]: 2025-12-11 09:18:38.295485952 +0000 UTC m=+0.130015810 container start ec25755e97624e99acacd30023063fa8f6838297796164ea46c55ee4f4fe5bfb (image=quay.io/ceph/ceph:v19, name=wizardly_mestorf, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325)
Dec 11 09:18:38 compute-0 wizardly_mestorf[101249]: 167 167
Dec 11 09:18:38 compute-0 podman[101233]: 2025-12-11 09:18:38.298752623 +0000 UTC m=+0.133282501 container attach ec25755e97624e99acacd30023063fa8f6838297796164ea46c55ee4f4fe5bfb (image=quay.io/ceph/ceph:v19, name=wizardly_mestorf, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 11 09:18:38 compute-0 systemd[1]: libpod-ec25755e97624e99acacd30023063fa8f6838297796164ea46c55ee4f4fe5bfb.scope: Deactivated successfully.
Dec 11 09:18:38 compute-0 conmon[101249]: conmon ec25755e97624e99acac <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-ec25755e97624e99acacd30023063fa8f6838297796164ea46c55ee4f4fe5bfb.scope/container/memory.events
Dec 11 09:18:38 compute-0 podman[101233]: 2025-12-11 09:18:38.300087324 +0000 UTC m=+0.134617182 container died ec25755e97624e99acacd30023063fa8f6838297796164ea46c55ee4f4fe5bfb (image=quay.io/ceph/ceph:v19, name=wizardly_mestorf, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250325, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 11 09:18:38 compute-0 systemd[1]: var-lib-containers-storage-overlay-b9daacc86289595df1358bde115d649a62b5d9106d0079e7e4b960dd7d68bbe6-merged.mount: Deactivated successfully.
Dec 11 09:18:38 compute-0 ceph-mon[74426]: mon.compute-0@0(leader).osd e123 do_prune osdmap full prune enabled
Dec 11 09:18:38 compute-0 ceph-mon[74426]: mon.compute-0@0(leader).osd e124 e124: 3 total, 3 up, 3 in
Dec 11 09:18:38 compute-0 ceph-mon[74426]: log_channel(cluster) log [DBG] : osdmap e124: 3 total, 3 up, 3 in
Dec 11 09:18:38 compute-0 podman[101233]: 2025-12-11 09:18:38.338950652 +0000 UTC m=+0.173480510 container remove ec25755e97624e99acacd30023063fa8f6838297796164ea46c55ee4f4fe5bfb (image=quay.io/ceph/ceph:v19, name=wizardly_mestorf, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=squid, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Dec 11 09:18:38 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 124 pg[10.14( v 56'1088 (0'0,56'1088] local-lis/les=0/0 n=5 ec=61/50 lis/c=122/76 les/c/f=123/77/0 sis=124) [1] r=0 lpr=124 pi=[76,124)/1 luod=0'0 crt=56'1088 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 1 -> 1, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Dec 11 09:18:38 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 124 pg[10.14( v 56'1088 (0'0,56'1088] local-lis/les=0/0 n=5 ec=61/50 lis/c=122/76 les/c/f=123/77/0 sis=124) [1] r=0 lpr=124 pi=[76,124)/1 crt=56'1088 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 11 09:18:38 compute-0 systemd[1]: libpod-conmon-ec25755e97624e99acacd30023063fa8f6838297796164ea46c55ee4f4fe5bfb.scope: Deactivated successfully.
Dec 11 09:18:38 compute-0 sudo[101191]: pam_unix(sudo:session): session closed for user root
Dec 11 09:18:38 compute-0 ceph-mon[74426]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec 11 09:18:38 compute-0 ceph-mon[74426]: log_channel(audit) log [INF] : from='mgr.14577 192.168.122.100:0/2102670525' entity='mgr.compute-0.wwpcae' 
Dec 11 09:18:38 compute-0 ceph-mon[74426]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec 11 09:18:38 compute-0 ceph-mon[74426]: log_channel(audit) log [INF] : from='mgr.14577 192.168.122.100:0/2102670525' entity='mgr.compute-0.wwpcae' 
Dec 11 09:18:38 compute-0 ceph-mgr[74715]: [cephadm INFO cephadm.serve] Reconfiguring crash.compute-0 (monmap changed)...
Dec 11 09:18:38 compute-0 ceph-mgr[74715]: log_channel(cephadm) log [INF] : Reconfiguring crash.compute-0 (monmap changed)...
Dec 11 09:18:38 compute-0 ceph-mon[74426]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.crash.compute-0", "caps": ["mon", "profile crash", "mgr", "profile crash"]} v 0)
Dec 11 09:18:38 compute-0 ceph-mon[74426]: log_channel(audit) log [INF] : from='mgr.14577 192.168.122.100:0/2102670525' entity='mgr.compute-0.wwpcae' cmd=[{"prefix": "auth get-or-create", "entity": "client.crash.compute-0", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]: dispatch
Dec 11 09:18:38 compute-0 ceph-mon[74426]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 11 09:18:38 compute-0 ceph-mon[74426]: log_channel(audit) log [DBG] : from='mgr.14577 192.168.122.100:0/2102670525' entity='mgr.compute-0.wwpcae' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 11 09:18:38 compute-0 ceph-mgr[74715]: [cephadm INFO cephadm.serve] Reconfiguring daemon crash.compute-0 on compute-0
Dec 11 09:18:38 compute-0 ceph-mgr[74715]: log_channel(cephadm) log [INF] : Reconfiguring daemon crash.compute-0 on compute-0
Dec 11 09:18:38 compute-0 sudo[101266]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 11 09:18:38 compute-0 sudo[101266]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 11 09:18:38 compute-0 sudo[101266]: pam_unix(sudo:session): session closed for user root
Dec 11 09:18:38 compute-0 sudo[101291]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 _orch deploy --fsid 31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060
Dec 11 09:18:38 compute-0 sudo[101291]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 11 09:18:38 compute-0 ceph-mon[74426]: from='mgr.14577 192.168.122.100:0/2102670525' entity='mgr.compute-0.wwpcae' 
Dec 11 09:18:38 compute-0 ceph-mon[74426]: from='mgr.14577 192.168.122.100:0/2102670525' entity='mgr.compute-0.wwpcae' 
Dec 11 09:18:38 compute-0 ceph-mon[74426]: Reconfiguring mgr.compute-0.wwpcae (monmap changed)...
Dec 11 09:18:38 compute-0 ceph-mon[74426]: from='mgr.14577 192.168.122.100:0/2102670525' entity='mgr.compute-0.wwpcae' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.compute-0.wwpcae", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch
Dec 11 09:18:38 compute-0 ceph-mon[74426]: from='mgr.14577 192.168.122.100:0/2102670525' entity='mgr.compute-0.wwpcae' cmd=[{"prefix": "mgr services"}]: dispatch
Dec 11 09:18:38 compute-0 ceph-mon[74426]: from='mgr.14577 192.168.122.100:0/2102670525' entity='mgr.compute-0.wwpcae' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 11 09:18:38 compute-0 ceph-mon[74426]: Reconfiguring daemon mgr.compute-0.wwpcae on compute-0
Dec 11 09:18:38 compute-0 ceph-mon[74426]: from='mgr.14577 192.168.122.100:0/2102670525' entity='mgr.compute-0.wwpcae' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec 11 09:18:38 compute-0 ceph-mon[74426]: osdmap e124: 3 total, 3 up, 3 in
Dec 11 09:18:38 compute-0 ceph-mon[74426]: from='mgr.14577 192.168.122.100:0/2102670525' entity='mgr.compute-0.wwpcae' 
Dec 11 09:18:38 compute-0 ceph-mon[74426]: from='mgr.14577 192.168.122.100:0/2102670525' entity='mgr.compute-0.wwpcae' 
Dec 11 09:18:38 compute-0 ceph-mon[74426]: from='mgr.14577 192.168.122.100:0/2102670525' entity='mgr.compute-0.wwpcae' cmd=[{"prefix": "auth get-or-create", "entity": "client.crash.compute-0", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]: dispatch
Dec 11 09:18:38 compute-0 ceph-mon[74426]: from='mgr.14577 192.168.122.100:0/2102670525' entity='mgr.compute-0.wwpcae' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 11 09:18:38 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-2-0-compute-0-iryjby[94782]: 11/12/2025 09:18:38 : epoch 693a8bfe : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fef4c0036c0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 11 09:18:38 compute-0 podman[101335]: 2025-12-11 09:18:38.877270749 +0000 UTC m=+0.045586457 container create 049fc103972c95aa10582efb43d5cf7f30127ae1a0f5731d9806d43f8896b88e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=romantic_moser, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 11 09:18:38 compute-0 systemd[1]: Started libpod-conmon-049fc103972c95aa10582efb43d5cf7f30127ae1a0f5731d9806d43f8896b88e.scope.
Dec 11 09:18:38 compute-0 systemd[1]: Started libcrun container.
Dec 11 09:18:38 compute-0 podman[101335]: 2025-12-11 09:18:38.860775341 +0000 UTC m=+0.029091069 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec 11 09:18:38 compute-0 podman[101335]: 2025-12-11 09:18:38.953550401 +0000 UTC m=+0.121866109 container init 049fc103972c95aa10582efb43d5cf7f30127ae1a0f5731d9806d43f8896b88e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=romantic_moser, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Dec 11 09:18:38 compute-0 podman[101335]: 2025-12-11 09:18:38.959097282 +0000 UTC m=+0.127412990 container start 049fc103972c95aa10582efb43d5cf7f30127ae1a0f5731d9806d43f8896b88e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=romantic_moser, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, ceph=True, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Dec 11 09:18:38 compute-0 podman[101335]: 2025-12-11 09:18:38.962168207 +0000 UTC m=+0.130483915 container attach 049fc103972c95aa10582efb43d5cf7f30127ae1a0f5731d9806d43f8896b88e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=romantic_moser, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 11 09:18:38 compute-0 romantic_moser[101351]: 167 167
Dec 11 09:18:38 compute-0 systemd[1]: libpod-049fc103972c95aa10582efb43d5cf7f30127ae1a0f5731d9806d43f8896b88e.scope: Deactivated successfully.
Dec 11 09:18:38 compute-0 podman[101335]: 2025-12-11 09:18:38.964291912 +0000 UTC m=+0.132607620 container died 049fc103972c95aa10582efb43d5cf7f30127ae1a0f5731d9806d43f8896b88e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=romantic_moser, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Dec 11 09:18:38 compute-0 radosgw[93354]: ====== starting new request req=0x7f36cd2845d0 =====
Dec 11 09:18:38 compute-0 radosgw[93354]: ====== req done req=0x7f36cd2845d0 op status=0 http_status=200 latency=0.001000031s ======
Dec 11 09:18:38 compute-0 radosgw[93354]: beast: 0x7f36cd2845d0: 192.168.122.102 - anonymous [11/Dec/2025:09:18:38.971 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Dec 11 09:18:38 compute-0 systemd[1]: var-lib-containers-storage-overlay-3f4580229aa4dde03c6eec70bf3205018d859b9f193a1af061913108d8134a17-merged.mount: Deactivated successfully.
Dec 11 09:18:38 compute-0 radosgw[93354]: ====== starting new request req=0x7f36cd2845d0 =====
Dec 11 09:18:38 compute-0 radosgw[93354]: ====== req done req=0x7f36cd2845d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 11 09:18:38 compute-0 radosgw[93354]: beast: 0x7f36cd2845d0: 192.168.122.100 - anonymous [11/Dec/2025:09:18:38.993 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 11 09:18:39 compute-0 podman[101335]: 2025-12-11 09:18:39.002637914 +0000 UTC m=+0.170953622 container remove 049fc103972c95aa10582efb43d5cf7f30127ae1a0f5731d9806d43f8896b88e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=romantic_moser, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, CEPH_REF=squid, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Dec 11 09:18:39 compute-0 systemd[1]: libpod-conmon-049fc103972c95aa10582efb43d5cf7f30127ae1a0f5731d9806d43f8896b88e.scope: Deactivated successfully.
Dec 11 09:18:39 compute-0 sudo[101291]: pam_unix(sudo:session): session closed for user root
Dec 11 09:18:39 compute-0 ceph-mon[74426]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec 11 09:18:39 compute-0 ceph-mon[74426]: log_channel(audit) log [INF] : from='mgr.14577 192.168.122.100:0/2102670525' entity='mgr.compute-0.wwpcae' 
Dec 11 09:18:39 compute-0 ceph-mon[74426]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec 11 09:18:39 compute-0 ceph-mgr[74715]: log_channel(cluster) log [DBG] : pgmap v22: 353 pgs: 353 active+clean; 457 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 11 09:18:39 compute-0 ceph-mon[74426]: log_channel(audit) log [INF] : from='mgr.14577 192.168.122.100:0/2102670525' entity='mgr.compute-0.wwpcae' 
Dec 11 09:18:39 compute-0 ceph-mon[74426]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "23"} v 0)
Dec 11 09:18:39 compute-0 ceph-mon[74426]: log_channel(audit) log [INF] : from='mgr.14577 192.168.122.100:0/2102670525' entity='mgr.compute-0.wwpcae' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "23"}]: dispatch
Dec 11 09:18:39 compute-0 ceph-mgr[74715]: [cephadm INFO cephadm.serve] Reconfiguring osd.1 (monmap changed)...
Dec 11 09:18:39 compute-0 ceph-mgr[74715]: log_channel(cephadm) log [INF] : Reconfiguring osd.1 (monmap changed)...
Dec 11 09:18:39 compute-0 ceph-mon[74426]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "osd.1"} v 0)
Dec 11 09:18:39 compute-0 ceph-mon[74426]: log_channel(audit) log [INF] : from='mgr.14577 192.168.122.100:0/2102670525' entity='mgr.compute-0.wwpcae' cmd=[{"prefix": "auth get", "entity": "osd.1"}]: dispatch
Dec 11 09:18:39 compute-0 ceph-mon[74426]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 11 09:18:39 compute-0 ceph-mon[74426]: log_channel(audit) log [DBG] : from='mgr.14577 192.168.122.100:0/2102670525' entity='mgr.compute-0.wwpcae' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 11 09:18:39 compute-0 ceph-mgr[74715]: [cephadm INFO cephadm.serve] Reconfiguring daemon osd.1 on compute-0
Dec 11 09:18:39 compute-0 ceph-mgr[74715]: log_channel(cephadm) log [INF] : Reconfiguring daemon osd.1 on compute-0
Dec 11 09:18:39 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-2-0-compute-0-iryjby[94782]: 11/12/2025 09:18:39 : epoch 693a8bfe : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fef50003c10 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 11 09:18:39 compute-0 sudo[101369]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 11 09:18:39 compute-0 sudo[101369]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 11 09:18:39 compute-0 sudo[101369]: pam_unix(sudo:session): session closed for user root
Dec 11 09:18:39 compute-0 sudo[101394]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 _orch deploy --fsid 31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060
Dec 11 09:18:39 compute-0 sudo[101394]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 11 09:18:39 compute-0 ceph-mon[74426]: mon.compute-0@0(leader).osd e124 do_prune osdmap full prune enabled
Dec 11 09:18:39 compute-0 ceph-mon[74426]: log_channel(audit) log [INF] : from='mgr.14577 192.168.122.100:0/2102670525' entity='mgr.compute-0.wwpcae' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "23"}]': finished
Dec 11 09:18:39 compute-0 ceph-mon[74426]: mon.compute-0@0(leader).osd e125 e125: 3 total, 3 up, 3 in
Dec 11 09:18:39 compute-0 ceph-mon[74426]: log_channel(cluster) log [DBG] : osdmap e125: 3 total, 3 up, 3 in
Dec 11 09:18:39 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 125 pg[10.14( v 56'1088 (0'0,56'1088] local-lis/les=124/125 n=5 ec=61/50 lis/c=122/76 les/c/f=123/77/0 sis=124) [1] r=0 lpr=124 pi=[76,124)/1 crt=56'1088 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 11 09:18:39 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-2-0-compute-0-iryjby[94782]: 11/12/2025 09:18:39 : epoch 693a8bfe : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fef70004460 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 11 09:18:39 compute-0 podman[101435]: 2025-12-11 09:18:39.546815372 +0000 UTC m=+0.042780480 container create 15f95934d950ca21ee2be2452ef05f139f3b912be3fadfb540e2f330a8b06754 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=modest_ptolemy, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 11 09:18:39 compute-0 systemd[1]: Started libpod-conmon-15f95934d950ca21ee2be2452ef05f139f3b912be3fadfb540e2f330a8b06754.scope.
Dec 11 09:18:39 compute-0 systemd[1]: Started libcrun container.
Dec 11 09:18:39 compute-0 podman[101435]: 2025-12-11 09:18:39.615809319 +0000 UTC m=+0.111774447 container init 15f95934d950ca21ee2be2452ef05f139f3b912be3fadfb540e2f330a8b06754 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=modest_ptolemy, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1)
Dec 11 09:18:39 compute-0 podman[101435]: 2025-12-11 09:18:39.622425613 +0000 UTC m=+0.118390721 container start 15f95934d950ca21ee2be2452ef05f139f3b912be3fadfb540e2f330a8b06754 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=modest_ptolemy, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Dec 11 09:18:39 compute-0 podman[101435]: 2025-12-11 09:18:39.530707876 +0000 UTC m=+0.026673004 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec 11 09:18:39 compute-0 modest_ptolemy[101452]: 167 167
Dec 11 09:18:39 compute-0 podman[101435]: 2025-12-11 09:18:39.626752076 +0000 UTC m=+0.122717204 container attach 15f95934d950ca21ee2be2452ef05f139f3b912be3fadfb540e2f330a8b06754 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=modest_ptolemy, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325)
Dec 11 09:18:39 compute-0 conmon[101452]: conmon 15f95934d950ca21ee2b <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-15f95934d950ca21ee2be2452ef05f139f3b912be3fadfb540e2f330a8b06754.scope/container/memory.events
Dec 11 09:18:39 compute-0 systemd[1]: libpod-15f95934d950ca21ee2be2452ef05f139f3b912be3fadfb540e2f330a8b06754.scope: Deactivated successfully.
Dec 11 09:18:39 compute-0 podman[101435]: 2025-12-11 09:18:39.628000505 +0000 UTC m=+0.123965613 container died 15f95934d950ca21ee2be2452ef05f139f3b912be3fadfb540e2f330a8b06754 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=modest_ptolemy, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 11 09:18:40 compute-0 systemd[1]: var-lib-containers-storage-overlay-8d8742ebd35a856e62cf1c8d89a13772c4ffc3e15cf2804bc1941fd020a79d92-merged.mount: Deactivated successfully.
Dec 11 09:18:40 compute-0 ceph-mon[74426]: Reconfiguring crash.compute-0 (monmap changed)...
Dec 11 09:18:40 compute-0 ceph-mon[74426]: Reconfiguring daemon crash.compute-0 on compute-0
Dec 11 09:18:40 compute-0 ceph-mon[74426]: from='mgr.14577 192.168.122.100:0/2102670525' entity='mgr.compute-0.wwpcae' 
Dec 11 09:18:40 compute-0 ceph-mon[74426]: pgmap v22: 353 pgs: 353 active+clean; 457 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 11 09:18:40 compute-0 ceph-mon[74426]: from='mgr.14577 192.168.122.100:0/2102670525' entity='mgr.compute-0.wwpcae' 
Dec 11 09:18:40 compute-0 ceph-mon[74426]: from='mgr.14577 192.168.122.100:0/2102670525' entity='mgr.compute-0.wwpcae' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "23"}]: dispatch
Dec 11 09:18:40 compute-0 ceph-mon[74426]: Reconfiguring osd.1 (monmap changed)...
Dec 11 09:18:40 compute-0 ceph-mon[74426]: from='mgr.14577 192.168.122.100:0/2102670525' entity='mgr.compute-0.wwpcae' cmd=[{"prefix": "auth get", "entity": "osd.1"}]: dispatch
Dec 11 09:18:40 compute-0 ceph-mon[74426]: from='mgr.14577 192.168.122.100:0/2102670525' entity='mgr.compute-0.wwpcae' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 11 09:18:40 compute-0 ceph-mon[74426]: Reconfiguring daemon osd.1 on compute-0
Dec 11 09:18:40 compute-0 ceph-mon[74426]: from='mgr.14577 192.168.122.100:0/2102670525' entity='mgr.compute-0.wwpcae' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "23"}]': finished
Dec 11 09:18:40 compute-0 ceph-mon[74426]: osdmap e125: 3 total, 3 up, 3 in
Dec 11 09:18:40 compute-0 podman[101435]: 2025-12-11 09:18:40.420656314 +0000 UTC m=+0.916621422 container remove 15f95934d950ca21ee2be2452ef05f139f3b912be3fadfb540e2f330a8b06754 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=modest_ptolemy, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, CEPH_REF=squid, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Dec 11 09:18:40 compute-0 systemd[1]: libpod-conmon-15f95934d950ca21ee2be2452ef05f139f3b912be3fadfb540e2f330a8b06754.scope: Deactivated successfully.
Dec 11 09:18:40 compute-0 sudo[101394]: pam_unix(sudo:session): session closed for user root
Dec 11 09:18:40 compute-0 ceph-mon[74426]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec 11 09:18:40 compute-0 ceph-mon[74426]: log_channel(audit) log [INF] : from='mgr.14577 192.168.122.100:0/2102670525' entity='mgr.compute-0.wwpcae' 
Dec 11 09:18:40 compute-0 ceph-mon[74426]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec 11 09:18:40 compute-0 ceph-mon[74426]: log_channel(audit) log [INF] : from='mgr.14577 192.168.122.100:0/2102670525' entity='mgr.compute-0.wwpcae' 
Dec 11 09:18:40 compute-0 ceph-mgr[74715]: [cephadm INFO cephadm.serve] Reconfiguring alertmanager.compute-0 (dependencies changed)...
Dec 11 09:18:40 compute-0 ceph-mgr[74715]: log_channel(cephadm) log [INF] : Reconfiguring alertmanager.compute-0 (dependencies changed)...
Dec 11 09:18:40 compute-0 ceph-mgr[74715]: [cephadm INFO cephadm.serve] Reconfiguring daemon alertmanager.compute-0 on compute-0
Dec 11 09:18:40 compute-0 ceph-mgr[74715]: log_channel(cephadm) log [INF] : Reconfiguring daemon alertmanager.compute-0 on compute-0
Dec 11 09:18:40 compute-0 sudo[101477]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 11 09:18:40 compute-0 sudo[101477]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 11 09:18:40 compute-0 sudo[101477]: pam_unix(sudo:session): session closed for user root
Dec 11 09:18:40 compute-0 sudo[101503]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/prometheus/alertmanager:v0.25.0 --timeout 895 _orch deploy --fsid 31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060
Dec 11 09:18:40 compute-0 sudo[101503]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 11 09:18:40 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-2-0-compute-0-iryjby[94782]: 11/12/2025 09:18:40 : epoch 693a8bfe : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fef60003490 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 11 09:18:40 compute-0 radosgw[93354]: ====== starting new request req=0x7f36cd2845d0 =====
Dec 11 09:18:40 compute-0 radosgw[93354]: ====== req done req=0x7f36cd2845d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 11 09:18:40 compute-0 radosgw[93354]: beast: 0x7f36cd2845d0: 192.168.122.102 - anonymous [11/Dec/2025:09:18:40.974 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 11 09:18:40 compute-0 radosgw[93354]: ====== starting new request req=0x7f36cd2845d0 =====
Dec 11 09:18:40 compute-0 radosgw[93354]: ====== req done req=0x7f36cd2845d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 11 09:18:40 compute-0 radosgw[93354]: beast: 0x7f36cd2845d0: 192.168.122.100 - anonymous [11/Dec/2025:09:18:40.996 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 11 09:18:41 compute-0 ceph-mgr[74715]: log_channel(cluster) log [DBG] : pgmap v24: 353 pgs: 1 peering, 352 active+clean; 457 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 848 B/s rd, 212 B/s wr, 1 op/s; 0 B/s, 1 objects/s recovering
Dec 11 09:18:41 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-2-0-compute-0-iryjby[94782]: 11/12/2025 09:18:41 : epoch 693a8bfe : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fef4c0036c0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 11 09:18:41 compute-0 podman[101544]: 2025-12-11 09:18:41.111009337 +0000 UTC m=+0.050863269 volume create c8bf2292f4dfed4f3fec13eea5287660245a74fd041037f80adac7338e8a54de
Dec 11 09:18:41 compute-0 podman[101544]: 2025-12-11 09:18:41.171098871 +0000 UTC m=+0.110952793 container create c94d1c403b52b34dbcb269752c94b1ead49786e5ec2ae16be2432c75dc6df2e0 (image=quay.io/prometheus/alertmanager:v0.25.0, name=eager_benz, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec 11 09:18:41 compute-0 podman[101544]: 2025-12-11 09:18:41.093565749 +0000 UTC m=+0.033419691 image pull c8568f914cd25b2062c44e9f79f9c18da6e3b85fe0c47a12a2191c61426c2b19 quay.io/prometheus/alertmanager:v0.25.0
Dec 11 09:18:41 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-2-0-compute-0-iryjby[94782]: 11/12/2025 09:18:41 : epoch 693a8bfe : compute-0 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Dec 11 09:18:41 compute-0 systemd[1]: Started libpod-conmon-c94d1c403b52b34dbcb269752c94b1ead49786e5ec2ae16be2432c75dc6df2e0.scope.
Dec 11 09:18:41 compute-0 ceph-mon[74426]: mon.compute-0@0(leader).osd e125 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 11 09:18:41 compute-0 systemd[1]: Started libcrun container.
Dec 11 09:18:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/aa20c9c969bcd5a077eeea2b32298b476edd7398a55386f5cf23fb0aa834969e/merged/alertmanager supports timestamps until 2038 (0x7fffffff)
Dec 11 09:18:41 compute-0 podman[101544]: 2025-12-11 09:18:41.268813283 +0000 UTC m=+0.208667235 container init c94d1c403b52b34dbcb269752c94b1ead49786e5ec2ae16be2432c75dc6df2e0 (image=quay.io/prometheus/alertmanager:v0.25.0, name=eager_benz, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec 11 09:18:41 compute-0 podman[101544]: 2025-12-11 09:18:41.276051716 +0000 UTC m=+0.215905628 container start c94d1c403b52b34dbcb269752c94b1ead49786e5ec2ae16be2432c75dc6df2e0 (image=quay.io/prometheus/alertmanager:v0.25.0, name=eager_benz, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec 11 09:18:41 compute-0 eager_benz[101560]: 65534 65534
Dec 11 09:18:41 compute-0 systemd[1]: libpod-c94d1c403b52b34dbcb269752c94b1ead49786e5ec2ae16be2432c75dc6df2e0.scope: Deactivated successfully.
Dec 11 09:18:41 compute-0 podman[101544]: 2025-12-11 09:18:41.27972908 +0000 UTC m=+0.219583022 container attach c94d1c403b52b34dbcb269752c94b1ead49786e5ec2ae16be2432c75dc6df2e0 (image=quay.io/prometheus/alertmanager:v0.25.0, name=eager_benz, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec 11 09:18:41 compute-0 podman[101544]: 2025-12-11 09:18:41.280891895 +0000 UTC m=+0.220745817 container died c94d1c403b52b34dbcb269752c94b1ead49786e5ec2ae16be2432c75dc6df2e0 (image=quay.io/prometheus/alertmanager:v0.25.0, name=eager_benz, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec 11 09:18:41 compute-0 systemd[1]: var-lib-containers-storage-overlay-aa20c9c969bcd5a077eeea2b32298b476edd7398a55386f5cf23fb0aa834969e-merged.mount: Deactivated successfully.
Dec 11 09:18:41 compute-0 podman[101544]: 2025-12-11 09:18:41.321972242 +0000 UTC m=+0.261826164 container remove c94d1c403b52b34dbcb269752c94b1ead49786e5ec2ae16be2432c75dc6df2e0 (image=quay.io/prometheus/alertmanager:v0.25.0, name=eager_benz, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec 11 09:18:41 compute-0 podman[101544]: 2025-12-11 09:18:41.327778461 +0000 UTC m=+0.267632383 volume remove c8bf2292f4dfed4f3fec13eea5287660245a74fd041037f80adac7338e8a54de
Dec 11 09:18:41 compute-0 systemd[1]: libpod-conmon-c94d1c403b52b34dbcb269752c94b1ead49786e5ec2ae16be2432c75dc6df2e0.scope: Deactivated successfully.
Dec 11 09:18:41 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-2-0-compute-0-iryjby[94782]: 11/12/2025 09:18:41 : epoch 693a8bfe : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fef4c0036c0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 11 09:18:41 compute-0 podman[101577]: 2025-12-11 09:18:41.410276834 +0000 UTC m=+0.053753448 volume create 8a7ea901ef885b144fdbefbf5000cf21bcae1bafcaa607eeb0b0135007a18aae
Dec 11 09:18:41 compute-0 podman[101577]: 2025-12-11 09:18:41.421573132 +0000 UTC m=+0.065049746 container create 89e5447b44ead1210b74b5b6cc83e40e268ad2128fd556be0df368572214d0a9 (image=quay.io/prometheus/alertmanager:v0.25.0, name=sweet_lumiere, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec 11 09:18:41 compute-0 systemd[1]: Started libpod-conmon-89e5447b44ead1210b74b5b6cc83e40e268ad2128fd556be0df368572214d0a9.scope.
Dec 11 09:18:41 compute-0 podman[101577]: 2025-12-11 09:18:41.390515495 +0000 UTC m=+0.033992129 image pull c8568f914cd25b2062c44e9f79f9c18da6e3b85fe0c47a12a2191c61426c2b19 quay.io/prometheus/alertmanager:v0.25.0
Dec 11 09:18:41 compute-0 systemd[1]: Started libcrun container.
Dec 11 09:18:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b50dab5bd4534d2d11809cf1fd7391a999b35de308bb0d93e071472159f0e4e3/merged/alertmanager supports timestamps until 2038 (0x7fffffff)
Dec 11 09:18:41 compute-0 podman[101577]: 2025-12-11 09:18:41.519998647 +0000 UTC m=+0.163475281 container init 89e5447b44ead1210b74b5b6cc83e40e268ad2128fd556be0df368572214d0a9 (image=quay.io/prometheus/alertmanager:v0.25.0, name=sweet_lumiere, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec 11 09:18:41 compute-0 podman[101577]: 2025-12-11 09:18:41.526604541 +0000 UTC m=+0.170081155 container start 89e5447b44ead1210b74b5b6cc83e40e268ad2128fd556be0df368572214d0a9 (image=quay.io/prometheus/alertmanager:v0.25.0, name=sweet_lumiere, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec 11 09:18:41 compute-0 sweet_lumiere[101593]: 65534 65534
Dec 11 09:18:41 compute-0 systemd[1]: libpod-89e5447b44ead1210b74b5b6cc83e40e268ad2128fd556be0df368572214d0a9.scope: Deactivated successfully.
Dec 11 09:18:41 compute-0 podman[101577]: 2025-12-11 09:18:41.530494451 +0000 UTC m=+0.173971085 container attach 89e5447b44ead1210b74b5b6cc83e40e268ad2128fd556be0df368572214d0a9 (image=quay.io/prometheus/alertmanager:v0.25.0, name=sweet_lumiere, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec 11 09:18:41 compute-0 podman[101577]: 2025-12-11 09:18:41.530908603 +0000 UTC m=+0.174385217 container died 89e5447b44ead1210b74b5b6cc83e40e268ad2128fd556be0df368572214d0a9 (image=quay.io/prometheus/alertmanager:v0.25.0, name=sweet_lumiere, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec 11 09:18:41 compute-0 systemd[1]: var-lib-containers-storage-overlay-b50dab5bd4534d2d11809cf1fd7391a999b35de308bb0d93e071472159f0e4e3-merged.mount: Deactivated successfully.
Dec 11 09:18:41 compute-0 podman[101577]: 2025-12-11 09:18:41.577379547 +0000 UTC m=+0.220856161 container remove 89e5447b44ead1210b74b5b6cc83e40e268ad2128fd556be0df368572214d0a9 (image=quay.io/prometheus/alertmanager:v0.25.0, name=sweet_lumiere, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec 11 09:18:41 compute-0 podman[101577]: 2025-12-11 09:18:41.580725449 +0000 UTC m=+0.224202083 volume remove 8a7ea901ef885b144fdbefbf5000cf21bcae1bafcaa607eeb0b0135007a18aae
Dec 11 09:18:41 compute-0 systemd[1]: libpod-conmon-89e5447b44ead1210b74b5b6cc83e40e268ad2128fd556be0df368572214d0a9.scope: Deactivated successfully.
Dec 11 09:18:41 compute-0 ceph-mon[74426]: from='mgr.14577 192.168.122.100:0/2102670525' entity='mgr.compute-0.wwpcae' 
Dec 11 09:18:41 compute-0 ceph-mon[74426]: from='mgr.14577 192.168.122.100:0/2102670525' entity='mgr.compute-0.wwpcae' 
Dec 11 09:18:41 compute-0 ceph-mon[74426]: Reconfiguring alertmanager.compute-0 (dependencies changed)...
Dec 11 09:18:41 compute-0 ceph-mon[74426]: Reconfiguring daemon alertmanager.compute-0 on compute-0
Dec 11 09:18:41 compute-0 ceph-mon[74426]: pgmap v24: 353 pgs: 1 peering, 352 active+clean; 457 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 848 B/s rd, 212 B/s wr, 1 op/s; 0 B/s, 1 objects/s recovering
Dec 11 09:18:41 compute-0 systemd[1]: Stopping Ceph alertmanager.compute-0 for 31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060...
Dec 11 09:18:41 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-alertmanager-compute-0[96287]: ts=2025-12-11T09:18:41.832Z caller=main.go:583 level=info msg="Received SIGTERM, exiting gracefully..."
Dec 11 09:18:41 compute-0 podman[101641]: 2025-12-11 09:18:41.842543832 +0000 UTC m=+0.055357938 container died f6be69c7d77625e49bceb27822a33c1c522509b882e158ee75cfb836ad9761a2 (image=quay.io/prometheus/alertmanager:v0.25.0, name=ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-alertmanager-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec 11 09:18:41 compute-0 systemd[1]: var-lib-containers-storage-overlay-b261e1b794c9e24e398799eb3612e9f64174800fe1aa024fbdb0b21a6083a158-merged.mount: Deactivated successfully.
Dec 11 09:18:41 compute-0 podman[101641]: 2025-12-11 09:18:41.881908415 +0000 UTC m=+0.094722501 container remove f6be69c7d77625e49bceb27822a33c1c522509b882e158ee75cfb836ad9761a2 (image=quay.io/prometheus/alertmanager:v0.25.0, name=ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-alertmanager-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec 11 09:18:41 compute-0 podman[101641]: 2025-12-11 09:18:41.88564369 +0000 UTC m=+0.098457796 volume remove d41bc3fc3ebefe39be0c02fbe1b70802d088e539dab7a004d605b30785c14792
Dec 11 09:18:41 compute-0 bash[101641]: ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-alertmanager-compute-0
Dec 11 09:18:42 compute-0 systemd[1]: ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060@alertmanager.compute-0.service: Deactivated successfully.
Dec 11 09:18:42 compute-0 systemd[1]: Stopped Ceph alertmanager.compute-0 for 31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060.
Dec 11 09:18:42 compute-0 systemd[1]: ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060@alertmanager.compute-0.service: Consumed 1.266s CPU time.
Dec 11 09:18:42 compute-0 systemd[1]: Starting Ceph alertmanager.compute-0 for 31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060...
Dec 11 09:18:42 compute-0 podman[101745]: 2025-12-11 09:18:42.284533029 +0000 UTC m=+0.045097972 volume create 4517a476accaae55cfc1ccffb50bfdf9e3b93139333574cf58335bcaae27b049
Dec 11 09:18:42 compute-0 podman[101745]: 2025-12-11 09:18:42.291988368 +0000 UTC m=+0.052553311 container create 3b9e0073af5e4c7c2b471c925a825b813ba9ef3bc5645cf6b5c090ee432c8a5d (image=quay.io/prometheus/alertmanager:v0.25.0, name=ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-alertmanager-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec 11 09:18:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/deec1a2c73bcd7cc85819e2e6b227d914196804b028390de03ed5a8ced33072a/merged/alertmanager supports timestamps until 2038 (0x7fffffff)
Dec 11 09:18:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/deec1a2c73bcd7cc85819e2e6b227d914196804b028390de03ed5a8ced33072a/merged/etc/alertmanager supports timestamps until 2038 (0x7fffffff)
Dec 11 09:18:42 compute-0 podman[101745]: 2025-12-11 09:18:42.350696478 +0000 UTC m=+0.111261441 container init 3b9e0073af5e4c7c2b471c925a825b813ba9ef3bc5645cf6b5c090ee432c8a5d (image=quay.io/prometheus/alertmanager:v0.25.0, name=ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-alertmanager-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec 11 09:18:42 compute-0 podman[101745]: 2025-12-11 09:18:42.357784517 +0000 UTC m=+0.118349460 container start 3b9e0073af5e4c7c2b471c925a825b813ba9ef3bc5645cf6b5c090ee432c8a5d (image=quay.io/prometheus/alertmanager:v0.25.0, name=ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-alertmanager-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec 11 09:18:42 compute-0 podman[101745]: 2025-12-11 09:18:42.266040759 +0000 UTC m=+0.026605732 image pull c8568f914cd25b2062c44e9f79f9c18da6e3b85fe0c47a12a2191c61426c2b19 quay.io/prometheus/alertmanager:v0.25.0
Dec 11 09:18:42 compute-0 bash[101745]: 3b9e0073af5e4c7c2b471c925a825b813ba9ef3bc5645cf6b5c090ee432c8a5d
Dec 11 09:18:42 compute-0 systemd[1]: Started Ceph alertmanager.compute-0 for 31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060.
Dec 11 09:18:42 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-alertmanager-compute-0[101760]: ts=2025-12-11T09:18:42.386Z caller=main.go:240 level=info msg="Starting Alertmanager" version="(version=0.25.0, branch=HEAD, revision=258fab7cdd551f2cf251ed0348f0ad7289aee789)"
Dec 11 09:18:42 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-alertmanager-compute-0[101760]: ts=2025-12-11T09:18:42.386Z caller=main.go:241 level=info build_context="(go=go1.19.4, user=root@abe866dd5717, date=20221222-14:51:36)"
Dec 11 09:18:42 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-alertmanager-compute-0[101760]: ts=2025-12-11T09:18:42.397Z caller=cluster.go:185 level=info component=cluster msg="setting advertise address explicitly" addr=192.168.122.100 port=9094
Dec 11 09:18:42 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-alertmanager-compute-0[101760]: ts=2025-12-11T09:18:42.400Z caller=cluster.go:681 level=info component=cluster msg="Waiting for gossip to settle..." interval=2s
Dec 11 09:18:42 compute-0 sudo[101503]: pam_unix(sudo:session): session closed for user root
Dec 11 09:18:42 compute-0 ceph-mon[74426]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec 11 09:18:42 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-alertmanager-compute-0[101760]: ts=2025-12-11T09:18:42.443Z caller=coordinator.go:113 level=info component=configuration msg="Loading configuration file" file=/etc/alertmanager/alertmanager.yml
Dec 11 09:18:42 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-alertmanager-compute-0[101760]: ts=2025-12-11T09:18:42.444Z caller=coordinator.go:126 level=info component=configuration msg="Completed loading of configuration file" file=/etc/alertmanager/alertmanager.yml
Dec 11 09:18:42 compute-0 ceph-mon[74426]: log_channel(audit) log [INF] : from='mgr.14577 192.168.122.100:0/2102670525' entity='mgr.compute-0.wwpcae' 
Dec 11 09:18:42 compute-0 ceph-mon[74426]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec 11 09:18:42 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-alertmanager-compute-0[101760]: ts=2025-12-11T09:18:42.451Z caller=tls_config.go:232 level=info msg="Listening on" address=192.168.122.100:9093
Dec 11 09:18:42 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-alertmanager-compute-0[101760]: ts=2025-12-11T09:18:42.451Z caller=tls_config.go:235 level=info msg="TLS is disabled." http2=false address=192.168.122.100:9093
Dec 11 09:18:42 compute-0 ceph-mon[74426]: log_channel(audit) log [INF] : from='mgr.14577 192.168.122.100:0/2102670525' entity='mgr.compute-0.wwpcae' 
Dec 11 09:18:42 compute-0 ceph-mgr[74715]: [cephadm INFO cephadm.serve] Reconfiguring grafana.compute-0 (dependencies changed)...
Dec 11 09:18:42 compute-0 ceph-mgr[74715]: log_channel(cephadm) log [INF] : Reconfiguring grafana.compute-0 (dependencies changed)...
Dec 11 09:18:42 compute-0 ceph-mgr[74715]: [cephadm INFO cephadm.serve] Reconfiguring daemon grafana.compute-0 on compute-0
Dec 11 09:18:42 compute-0 ceph-mgr[74715]: log_channel(cephadm) log [INF] : Reconfiguring daemon grafana.compute-0 on compute-0
Dec 11 09:18:42 compute-0 sudo[101781]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 11 09:18:42 compute-0 sudo[101781]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 11 09:18:42 compute-0 sudo[101781]: pam_unix(sudo:session): session closed for user root
Dec 11 09:18:42 compute-0 sudo[101806]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/grafana:10.4.0 --timeout 895 _orch deploy --fsid 31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060
Dec 11 09:18:42 compute-0 sudo[101806]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 11 09:18:42 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-2-0-compute-0-iryjby[94782]: 11/12/2025 09:18:42 : epoch 693a8bfe : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fef70004480 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 11 09:18:42 compute-0 radosgw[93354]: ====== starting new request req=0x7f36cd2845d0 =====
Dec 11 09:18:42 compute-0 radosgw[93354]: ====== req done req=0x7f36cd2845d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 11 09:18:42 compute-0 radosgw[93354]: beast: 0x7f36cd2845d0: 192.168.122.102 - anonymous [11/Dec/2025:09:18:42.978 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 11 09:18:43 compute-0 radosgw[93354]: ====== starting new request req=0x7f36cd2845d0 =====
Dec 11 09:18:43 compute-0 radosgw[93354]: ====== req done req=0x7f36cd2845d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 11 09:18:43 compute-0 radosgw[93354]: beast: 0x7f36cd2845d0: 192.168.122.100 - anonymous [11/Dec/2025:09:18:42.998 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 11 09:18:43 compute-0 ceph-mgr[74715]: log_channel(cluster) log [DBG] : pgmap v25: 353 pgs: 1 peering, 352 active+clean; 457 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 682 B/s rd, 170 B/s wr, 0 op/s; 0 B/s, 0 objects/s recovering
Dec 11 09:18:43 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-2-0-compute-0-iryjby[94782]: 11/12/2025 09:18:43 : epoch 693a8bfe : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fef60003490 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 11 09:18:43 compute-0 podman[101847]: 2025-12-11 09:18:43.163914991 +0000 UTC m=+0.052225901 container create b2843ca21a9357dd6b5a57b718d8016ac2f4ae530dffab98dce33f4c4024312d (image=quay.io/ceph/grafana:10.4.0, name=thirsty_villani, maintainer=Grafana Labs <hello@grafana.com>)
Dec 11 09:18:43 compute-0 systemd[1]: Started libpod-conmon-b2843ca21a9357dd6b5a57b718d8016ac2f4ae530dffab98dce33f4c4024312d.scope.
Dec 11 09:18:43 compute-0 podman[101847]: 2025-12-11 09:18:43.144169262 +0000 UTC m=+0.032480212 image pull c8b91775d855b99270fc5d22f3c6737e8cca01ef4c25c8b0362295e0746fa39b quay.io/ceph/grafana:10.4.0
Dec 11 09:18:43 compute-0 systemd[1]: Started libcrun container.
Dec 11 09:18:43 compute-0 podman[101847]: 2025-12-11 09:18:43.267118303 +0000 UTC m=+0.155429233 container init b2843ca21a9357dd6b5a57b718d8016ac2f4ae530dffab98dce33f4c4024312d (image=quay.io/ceph/grafana:10.4.0, name=thirsty_villani, maintainer=Grafana Labs <hello@grafana.com>)
Dec 11 09:18:43 compute-0 podman[101847]: 2025-12-11 09:18:43.275927715 +0000 UTC m=+0.164238625 container start b2843ca21a9357dd6b5a57b718d8016ac2f4ae530dffab98dce33f4c4024312d (image=quay.io/ceph/grafana:10.4.0, name=thirsty_villani, maintainer=Grafana Labs <hello@grafana.com>)
Dec 11 09:18:43 compute-0 podman[101847]: 2025-12-11 09:18:43.280071342 +0000 UTC m=+0.168382242 container attach b2843ca21a9357dd6b5a57b718d8016ac2f4ae530dffab98dce33f4c4024312d (image=quay.io/ceph/grafana:10.4.0, name=thirsty_villani, maintainer=Grafana Labs <hello@grafana.com>)
Dec 11 09:18:43 compute-0 thirsty_villani[101864]: 472 0
Dec 11 09:18:43 compute-0 systemd[1]: libpod-b2843ca21a9357dd6b5a57b718d8016ac2f4ae530dffab98dce33f4c4024312d.scope: Deactivated successfully.
Dec 11 09:18:43 compute-0 conmon[101864]: conmon b2843ca21a9357dd6b5a <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-b2843ca21a9357dd6b5a57b718d8016ac2f4ae530dffab98dce33f4c4024312d.scope/container/memory.events
Dec 11 09:18:43 compute-0 podman[101847]: 2025-12-11 09:18:43.284466868 +0000 UTC m=+0.172777778 container died b2843ca21a9357dd6b5a57b718d8016ac2f4ae530dffab98dce33f4c4024312d (image=quay.io/ceph/grafana:10.4.0, name=thirsty_villani, maintainer=Grafana Labs <hello@grafana.com>)
Dec 11 09:18:43 compute-0 systemd[1]: var-lib-containers-storage-overlay-46fab1d6c0230e019a2210232a3283fe3f0648923e754df08aa6fc5157d8bada-merged.mount: Deactivated successfully.
Dec 11 09:18:43 compute-0 podman[101847]: 2025-12-11 09:18:43.336178342 +0000 UTC m=+0.224489252 container remove b2843ca21a9357dd6b5a57b718d8016ac2f4ae530dffab98dce33f4c4024312d (image=quay.io/ceph/grafana:10.4.0, name=thirsty_villani, maintainer=Grafana Labs <hello@grafana.com>)
Dec 11 09:18:43 compute-0 systemd[1]: libpod-conmon-b2843ca21a9357dd6b5a57b718d8016ac2f4ae530dffab98dce33f4c4024312d.scope: Deactivated successfully.
Dec 11 09:18:43 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-2-0-compute-0-iryjby[94782]: 11/12/2025 09:18:43 : epoch 693a8bfe : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fef50003c10 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 11 09:18:43 compute-0 podman[101880]: 2025-12-11 09:18:43.416512249 +0000 UTC m=+0.052800669 container create 0fd308b106c94b2c5fbf2e37133e0d82d86593c7e312df08539cc4a93eb2bfb7 (image=quay.io/ceph/grafana:10.4.0, name=affectionate_dijkstra, maintainer=Grafana Labs <hello@grafana.com>)
Dec 11 09:18:43 compute-0 ceph-mon[74426]: from='mgr.14577 192.168.122.100:0/2102670525' entity='mgr.compute-0.wwpcae' 
Dec 11 09:18:43 compute-0 ceph-mon[74426]: from='mgr.14577 192.168.122.100:0/2102670525' entity='mgr.compute-0.wwpcae' 
Dec 11 09:18:43 compute-0 ceph-mon[74426]: Reconfiguring grafana.compute-0 (dependencies changed)...
Dec 11 09:18:43 compute-0 ceph-mon[74426]: Reconfiguring daemon grafana.compute-0 on compute-0
Dec 11 09:18:43 compute-0 ceph-mon[74426]: pgmap v25: 353 pgs: 1 peering, 352 active+clean; 457 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 682 B/s rd, 170 B/s wr, 0 op/s; 0 B/s, 0 objects/s recovering
Dec 11 09:18:43 compute-0 systemd[1]: Started libpod-conmon-0fd308b106c94b2c5fbf2e37133e0d82d86593c7e312df08539cc4a93eb2bfb7.scope.
Dec 11 09:18:43 compute-0 podman[101880]: 2025-12-11 09:18:43.392641533 +0000 UTC m=+0.028929973 image pull c8b91775d855b99270fc5d22f3c6737e8cca01ef4c25c8b0362295e0746fa39b quay.io/ceph/grafana:10.4.0
Dec 11 09:18:43 compute-0 systemd[1]: Started libcrun container.
Dec 11 09:18:43 compute-0 podman[101880]: 2025-12-11 09:18:43.501279982 +0000 UTC m=+0.137568422 container init 0fd308b106c94b2c5fbf2e37133e0d82d86593c7e312df08539cc4a93eb2bfb7 (image=quay.io/ceph/grafana:10.4.0, name=affectionate_dijkstra, maintainer=Grafana Labs <hello@grafana.com>)
Dec 11 09:18:43 compute-0 podman[101880]: 2025-12-11 09:18:43.50737273 +0000 UTC m=+0.143661150 container start 0fd308b106c94b2c5fbf2e37133e0d82d86593c7e312df08539cc4a93eb2bfb7 (image=quay.io/ceph/grafana:10.4.0, name=affectionate_dijkstra, maintainer=Grafana Labs <hello@grafana.com>)
Dec 11 09:18:43 compute-0 affectionate_dijkstra[101896]: 472 0
Dec 11 09:18:43 compute-0 systemd[1]: libpod-0fd308b106c94b2c5fbf2e37133e0d82d86593c7e312df08539cc4a93eb2bfb7.scope: Deactivated successfully.
Dec 11 09:18:43 compute-0 podman[101880]: 2025-12-11 09:18:43.51029537 +0000 UTC m=+0.146583790 container attach 0fd308b106c94b2c5fbf2e37133e0d82d86593c7e312df08539cc4a93eb2bfb7 (image=quay.io/ceph/grafana:10.4.0, name=affectionate_dijkstra, maintainer=Grafana Labs <hello@grafana.com>)
Dec 11 09:18:43 compute-0 podman[101880]: 2025-12-11 09:18:43.510826647 +0000 UTC m=+0.147115067 container died 0fd308b106c94b2c5fbf2e37133e0d82d86593c7e312df08539cc4a93eb2bfb7 (image=quay.io/ceph/grafana:10.4.0, name=affectionate_dijkstra, maintainer=Grafana Labs <hello@grafana.com>)
Dec 11 09:18:43 compute-0 systemd[1]: var-lib-containers-storage-overlay-7e6acfd5dfc96fe655152f0950b1e8c5c5717e48d705a1d49616709242235b4d-merged.mount: Deactivated successfully.
Dec 11 09:18:43 compute-0 podman[101880]: 2025-12-11 09:18:43.544049071 +0000 UTC m=+0.180337481 container remove 0fd308b106c94b2c5fbf2e37133e0d82d86593c7e312df08539cc4a93eb2bfb7 (image=quay.io/ceph/grafana:10.4.0, name=affectionate_dijkstra, maintainer=Grafana Labs <hello@grafana.com>)
Dec 11 09:18:43 compute-0 systemd[1]: libpod-conmon-0fd308b106c94b2c5fbf2e37133e0d82d86593c7e312df08539cc4a93eb2bfb7.scope: Deactivated successfully.
Dec 11 09:18:43 compute-0 systemd[1]: Stopping Ceph grafana.compute-0 for 31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060...
Dec 11 09:18:43 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=server t=2025-12-11T09:18:43.822514207Z level=info msg="Shutdown started" reason="System signal: terminated"
Dec 11 09:18:43 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=grafana-apiserver t=2025-12-11T09:18:43.823635391Z level=info msg="StorageObjectCountTracker pruner is exiting"
Dec 11 09:18:43 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=tracing t=2025-12-11T09:18:43.82394382Z level=info msg="Closing tracing"
Dec 11 09:18:43 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=ticker t=2025-12-11T09:18:43.824049443Z level=info msg=stopped last_tick=2025-12-11T09:18:40Z
Dec 11 09:18:43 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[96953]: logger=sqlstore.transactions t=2025-12-11T09:18:43.837491228Z level=info msg="Database locked, sleeping then retrying" error="database is locked" retry=0 code="database is locked"
Dec 11 09:18:43 compute-0 podman[101943]: 2025-12-11 09:18:43.853409279 +0000 UTC m=+0.070062971 container died 3ec8d07fc34e7ed8ee0d4326c60dde172854d4253d34e863a630ec7b70332f71 (image=quay.io/ceph/grafana:10.4.0, name=ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0, maintainer=Grafana Labs <hello@grafana.com>)
Dec 11 09:18:43 compute-0 systemd[1]: var-lib-containers-storage-overlay-b95a6393647c0ebf89aaca9c372d34aa00c35f8f47a1e8d537be1f14ced730f1-merged.mount: Deactivated successfully.
Dec 11 09:18:43 compute-0 podman[101943]: 2025-12-11 09:18:43.903528344 +0000 UTC m=+0.120182016 container remove 3ec8d07fc34e7ed8ee0d4326c60dde172854d4253d34e863a630ec7b70332f71 (image=quay.io/ceph/grafana:10.4.0, name=ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0, maintainer=Grafana Labs <hello@grafana.com>)
Dec 11 09:18:43 compute-0 bash[101943]: ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0
Dec 11 09:18:44 compute-0 systemd[1]: ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060@grafana.compute-0.service: Deactivated successfully.
Dec 11 09:18:44 compute-0 systemd[1]: Stopped Ceph grafana.compute-0 for 31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060.
Dec 11 09:18:44 compute-0 systemd[1]: ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060@grafana.compute-0.service: Consumed 4.425s CPU time.
Dec 11 09:18:44 compute-0 systemd[1]: Starting Ceph grafana.compute-0 for 31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060...
Dec 11 09:18:44 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-2-0-compute-0-iryjby[94782]: 11/12/2025 09:18:44 : epoch 693a8bfe : compute-0 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Dec 11 09:18:44 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-2-0-compute-0-iryjby[94782]: 11/12/2025 09:18:44 : epoch 693a8bfe : compute-0 : ganesha.nfsd-2[reaper] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Dec 11 09:18:44 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-2-0-compute-0-iryjby[94782]: 11/12/2025 09:18:44 : epoch 693a8bfe : compute-0 : ganesha.nfsd-2[reaper] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Dec 11 09:18:44 compute-0 podman[102043]: 2025-12-11 09:18:44.287160642 +0000 UTC m=+0.049960891 container create ac5286c4a64ed2433117d537e938dca0aaa6db8529e224a2e9020f17a5de7e45 (image=quay.io/ceph/grafana:10.4.0, name=ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0, maintainer=Grafana Labs <hello@grafana.com>)
Dec 11 09:18:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3e9a2a4777e8a78eb64fa179c693369a3bb225dae2c5f63c2761bcf06ff5e946/merged/etc/grafana/grafana.ini supports timestamps until 2038 (0x7fffffff)
Dec 11 09:18:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3e9a2a4777e8a78eb64fa179c693369a3bb225dae2c5f63c2761bcf06ff5e946/merged/etc/grafana/certs supports timestamps until 2038 (0x7fffffff)
Dec 11 09:18:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3e9a2a4777e8a78eb64fa179c693369a3bb225dae2c5f63c2761bcf06ff5e946/merged/etc/grafana/provisioning/datasources supports timestamps until 2038 (0x7fffffff)
Dec 11 09:18:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3e9a2a4777e8a78eb64fa179c693369a3bb225dae2c5f63c2761bcf06ff5e946/merged/etc/grafana/provisioning/dashboards supports timestamps until 2038 (0x7fffffff)
Dec 11 09:18:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3e9a2a4777e8a78eb64fa179c693369a3bb225dae2c5f63c2761bcf06ff5e946/merged/var/lib/grafana/grafana.db supports timestamps until 2038 (0x7fffffff)
Dec 11 09:18:44 compute-0 podman[102043]: 2025-12-11 09:18:44.339387752 +0000 UTC m=+0.102188021 container init ac5286c4a64ed2433117d537e938dca0aaa6db8529e224a2e9020f17a5de7e45 (image=quay.io/ceph/grafana:10.4.0, name=ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0, maintainer=Grafana Labs <hello@grafana.com>)
Dec 11 09:18:44 compute-0 podman[102043]: 2025-12-11 09:18:44.345339176 +0000 UTC m=+0.108139425 container start ac5286c4a64ed2433117d537e938dca0aaa6db8529e224a2e9020f17a5de7e45 (image=quay.io/ceph/grafana:10.4.0, name=ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0, maintainer=Grafana Labs <hello@grafana.com>)
Dec 11 09:18:44 compute-0 bash[102043]: ac5286c4a64ed2433117d537e938dca0aaa6db8529e224a2e9020f17a5de7e45
Dec 11 09:18:44 compute-0 podman[102043]: 2025-12-11 09:18:44.266878236 +0000 UTC m=+0.029678505 image pull c8b91775d855b99270fc5d22f3c6737e8cca01ef4c25c8b0362295e0746fa39b quay.io/ceph/grafana:10.4.0
Dec 11 09:18:44 compute-0 systemd[1]: Started Ceph grafana.compute-0 for 31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060.
Dec 11 09:18:44 compute-0 sudo[101806]: pam_unix(sudo:session): session closed for user root
Dec 11 09:18:44 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-alertmanager-compute-0[101760]: ts=2025-12-11T09:18:44.400Z caller=cluster.go:706 level=info component=cluster msg="gossip not settled" polls=0 before=0 now=1 elapsed=2.000232329s
Dec 11 09:18:44 compute-0 ceph-mon[74426]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec 11 09:18:44 compute-0 ceph-mon[74426]: log_channel(audit) log [INF] : from='mgr.14577 192.168.122.100:0/2102670525' entity='mgr.compute-0.wwpcae' 
Dec 11 09:18:44 compute-0 ceph-mon[74426]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec 11 09:18:44 compute-0 ceph-mon[74426]: log_channel(audit) log [INF] : from='mgr.14577 192.168.122.100:0/2102670525' entity='mgr.compute-0.wwpcae' 
Dec 11 09:18:44 compute-0 ceph-mgr[74715]: [cephadm INFO cephadm.serve] Reconfiguring crash.compute-1 (monmap changed)...
Dec 11 09:18:44 compute-0 ceph-mgr[74715]: log_channel(cephadm) log [INF] : Reconfiguring crash.compute-1 (monmap changed)...
Dec 11 09:18:44 compute-0 ceph-mon[74426]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.crash.compute-1", "caps": ["mon", "profile crash", "mgr", "profile crash"]} v 0)
Dec 11 09:18:44 compute-0 ceph-mon[74426]: log_channel(audit) log [INF] : from='mgr.14577 192.168.122.100:0/2102670525' entity='mgr.compute-0.wwpcae' cmd=[{"prefix": "auth get-or-create", "entity": "client.crash.compute-1", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]: dispatch
Dec 11 09:18:44 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-mgr-compute-0-wwpcae[74711]: ::ffff:192.168.122.100 - - [11/Dec/2025:09:18:44] "GET /metrics HTTP/1.1" 200 48233 "" "Prometheus/2.51.0"
Dec 11 09:18:44 compute-0 ceph-mgr[74715]: [prometheus INFO cherrypy.access.139680037156848] ::ffff:192.168.122.100 - - [11/Dec/2025:09:18:44] "GET /metrics HTTP/1.1" 200 48233 "" "Prometheus/2.51.0"
Dec 11 09:18:44 compute-0 ceph-mon[74426]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 11 09:18:44 compute-0 ceph-mon[74426]: log_channel(audit) log [DBG] : from='mgr.14577 192.168.122.100:0/2102670525' entity='mgr.compute-0.wwpcae' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 11 09:18:44 compute-0 ceph-mgr[74715]: [cephadm INFO cephadm.serve] Reconfiguring daemon crash.compute-1 on compute-1
Dec 11 09:18:44 compute-0 ceph-mgr[74715]: log_channel(cephadm) log [INF] : Reconfiguring daemon crash.compute-1 on compute-1
Dec 11 09:18:44 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[102058]: logger=settings t=2025-12-11T09:18:44.565868655Z level=info msg="Starting Grafana" version=10.4.0 commit=03f502a94d17f7dc4e6c34acdf8428aedd986e4c branch=HEAD compiled=2025-12-11T09:18:44Z
Dec 11 09:18:44 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[102058]: logger=settings t=2025-12-11T09:18:44.566245186Z level=info msg="Config loaded from" file=/usr/share/grafana/conf/defaults.ini
Dec 11 09:18:44 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[102058]: logger=settings t=2025-12-11T09:18:44.566255097Z level=info msg="Config loaded from" file=/etc/grafana/grafana.ini
Dec 11 09:18:44 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[102058]: logger=settings t=2025-12-11T09:18:44.566259077Z level=info msg="Config overridden from command line" arg="default.paths.data=/var/lib/grafana"
Dec 11 09:18:44 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[102058]: logger=settings t=2025-12-11T09:18:44.566262637Z level=info msg="Config overridden from command line" arg="default.paths.logs=/var/log/grafana"
Dec 11 09:18:44 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[102058]: logger=settings t=2025-12-11T09:18:44.566272997Z level=info msg="Config overridden from command line" arg="default.paths.plugins=/var/lib/grafana/plugins"
Dec 11 09:18:44 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[102058]: logger=settings t=2025-12-11T09:18:44.566276337Z level=info msg="Config overridden from command line" arg="default.paths.provisioning=/etc/grafana/provisioning"
Dec 11 09:18:44 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[102058]: logger=settings t=2025-12-11T09:18:44.566279737Z level=info msg="Config overridden from command line" arg="default.log.mode=console"
Dec 11 09:18:44 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[102058]: logger=settings t=2025-12-11T09:18:44.566283608Z level=info msg="Config overridden from Environment variable" var="GF_PATHS_DATA=/var/lib/grafana"
Dec 11 09:18:44 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[102058]: logger=settings t=2025-12-11T09:18:44.566287048Z level=info msg="Config overridden from Environment variable" var="GF_PATHS_LOGS=/var/log/grafana"
Dec 11 09:18:44 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[102058]: logger=settings t=2025-12-11T09:18:44.566290208Z level=info msg="Config overridden from Environment variable" var="GF_PATHS_PLUGINS=/var/lib/grafana/plugins"
Dec 11 09:18:44 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[102058]: logger=settings t=2025-12-11T09:18:44.566293458Z level=info msg="Config overridden from Environment variable" var="GF_PATHS_PROVISIONING=/etc/grafana/provisioning"
Dec 11 09:18:44 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[102058]: logger=settings t=2025-12-11T09:18:44.566296858Z level=info msg=Target target=[all]
Dec 11 09:18:44 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[102058]: logger=settings t=2025-12-11T09:18:44.566303338Z level=info msg="Path Home" path=/usr/share/grafana
Dec 11 09:18:44 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[102058]: logger=settings t=2025-12-11T09:18:44.566331669Z level=info msg="Path Data" path=/var/lib/grafana
Dec 11 09:18:44 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[102058]: logger=settings t=2025-12-11T09:18:44.566339839Z level=info msg="Path Logs" path=/var/log/grafana
Dec 11 09:18:44 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[102058]: logger=settings t=2025-12-11T09:18:44.56635233Z level=info msg="Path Plugins" path=/var/lib/grafana/plugins
Dec 11 09:18:44 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[102058]: logger=settings t=2025-12-11T09:18:44.56635655Z level=info msg="Path Provisioning" path=/etc/grafana/provisioning
Dec 11 09:18:44 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[102058]: logger=settings t=2025-12-11T09:18:44.56635991Z level=info msg="App mode production"
Dec 11 09:18:44 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[102058]: logger=sqlstore t=2025-12-11T09:18:44.566773663Z level=info msg="Connecting to DB" dbtype=sqlite3
Dec 11 09:18:44 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[102058]: logger=sqlstore t=2025-12-11T09:18:44.566798494Z level=warn msg="SQLite database file has broader permissions than it should" path=/var/lib/grafana/grafana.db mode=-rw-r--r-- expected=-rw-r-----
Dec 11 09:18:44 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[102058]: logger=migrator t=2025-12-11T09:18:44.567859886Z level=info msg="Starting DB migrations"
Dec 11 09:18:44 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[102058]: logger=migrator t=2025-12-11T09:18:44.590778302Z level=info msg="migrations completed" performed=0 skipped=547 duration=1.596818ms
Dec 11 09:18:44 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[102058]: logger=sqlstore t=2025-12-11T09:18:44.592357702Z level=info msg="Created default organization"
Dec 11 09:18:44 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[102058]: logger=secrets t=2025-12-11T09:18:44.593709793Z level=info msg="Envelope encryption state" enabled=true currentprovider=secretKey.v1
Dec 11 09:18:44 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[102058]: logger=plugin.store t=2025-12-11T09:18:44.613877445Z level=info msg="Loading plugins..."
Dec 11 09:18:44 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[102058]: logger=local.finder t=2025-12-11T09:18:44.708954206Z level=warn msg="Skipping finding plugins as directory does not exist" path=/usr/share/grafana/plugins-bundled
Dec 11 09:18:44 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[102058]: logger=plugin.store t=2025-12-11T09:18:44.709002808Z level=info msg="Plugins loaded" count=55 duration=95.125753ms
Dec 11 09:18:44 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[102058]: logger=query_data t=2025-12-11T09:18:44.713851447Z level=info msg="Query Service initialization"
Dec 11 09:18:44 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[102058]: logger=live.push_http t=2025-12-11T09:18:44.719572993Z level=info msg="Live Push Gateway initialization"
Dec 11 09:18:44 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[102058]: logger=ngalert.migration t=2025-12-11T09:18:44.722780662Z level=info msg=Starting
Dec 11 09:18:44 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[102058]: logger=ngalert.state.manager t=2025-12-11T09:18:44.737686512Z level=info msg="Running in alternative execution of Error/NoData mode"
Dec 11 09:18:44 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[102058]: logger=infra.usagestats.collector t=2025-12-11T09:18:44.740483088Z level=info msg="registering usage stat providers" usageStatsProvidersLen=2
Dec 11 09:18:44 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[102058]: logger=provisioning.datasources t=2025-12-11T09:18:44.743001586Z level=info msg="inserting datasource from configuration" name=Dashboard1 uid=P43CA22E17D0F9596
Dec 11 09:18:44 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[102058]: logger=provisioning.alerting t=2025-12-11T09:18:44.769296716Z level=info msg="starting to provision alerting"
Dec 11 09:18:44 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[102058]: logger=provisioning.alerting t=2025-12-11T09:18:44.769401499Z level=info msg="finished to provision alerting"
Dec 11 09:18:44 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[102058]: logger=ngalert.state.manager t=2025-12-11T09:18:44.769642417Z level=info msg="Warming state cache for startup"
Dec 11 09:18:44 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-2-0-compute-0-iryjby[94782]: 11/12/2025 09:18:44 : epoch 693a8bfe : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fef4c0036c0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 11 09:18:44 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[102058]: logger=ngalert.multiorg.alertmanager t=2025-12-11T09:18:44.788708156Z level=info msg="Starting MultiOrg Alertmanager"
Dec 11 09:18:44 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[102058]: logger=grafanaStorageLogger t=2025-12-11T09:18:44.789018414Z level=info msg="Storage starting"
Dec 11 09:18:44 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[102058]: logger=http.server t=2025-12-11T09:18:44.794445382Z level=info msg="HTTP Server TLS settings" MinTLSVersion=TLS1.2 configuredciphers=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA,TLS_ECDHE_ECDSA_WITH_AES_256_CBC_SHA,TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA,TLS_RSA_WITH_AES_128_GCM_SHA256,TLS_RSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_128_CBC_SHA,TLS_RSA_WITH_AES_256_CBC_SHA
Dec 11 09:18:44 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[102058]: logger=http.server t=2025-12-11T09:18:44.79504645Z level=info msg="HTTP Server Listen" address=192.168.122.100:3000 protocol=https subUrl= socket=
Dec 11 09:18:44 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[102058]: logger=ngalert.state.manager t=2025-12-11T09:18:44.814777119Z level=info msg="State cache has been initialized" states=0 duration=45.120921ms
Dec 11 09:18:44 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[102058]: logger=ngalert.scheduler t=2025-12-11T09:18:44.81514761Z level=info msg="Starting scheduler" tickInterval=10s maxAttempts=1
Dec 11 09:18:44 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[102058]: logger=ticker t=2025-12-11T09:18:44.815427069Z level=info msg=starting first_tick=2025-12-11T09:18:50Z
Dec 11 09:18:44 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[102058]: logger=provisioning.dashboard t=2025-12-11T09:18:44.819647459Z level=info msg="starting to provision dashboards"
Dec 11 09:18:44 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[102058]: logger=provisioning.dashboard t=2025-12-11T09:18:44.83688673Z level=info msg="finished to provision dashboards"
Dec 11 09:18:44 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[102058]: logger=plugins.update.checker t=2025-12-11T09:18:44.857378772Z level=info msg="Update check succeeded" duration=87.743955ms
Dec 11 09:18:44 compute-0 radosgw[93354]: ====== starting new request req=0x7f36cd2845d0 =====
Dec 11 09:18:44 compute-0 radosgw[93354]: ====== req done req=0x7f36cd2845d0 op status=0 http_status=200 latency=0.001000031s ======
Dec 11 09:18:44 compute-0 radosgw[93354]: beast: 0x7f36cd2845d0: 192.168.122.102 - anonymous [11/Dec/2025:09:18:44.981 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Dec 11 09:18:44 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[102058]: logger=grafana.update.checker t=2025-12-11T09:18:44.994913873Z level=info msg="Update check succeeded" duration=225.329718ms
Dec 11 09:18:45 compute-0 radosgw[93354]: ====== starting new request req=0x7f36cd2845d0 =====
Dec 11 09:18:45 compute-0 radosgw[93354]: ====== req done req=0x7f36cd2845d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 11 09:18:45 compute-0 radosgw[93354]: beast: 0x7f36cd2845d0: 192.168.122.100 - anonymous [11/Dec/2025:09:18:45.002 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 11 09:18:45 compute-0 ceph-mgr[74715]: log_channel(cluster) log [DBG] : pgmap v26: 353 pgs: 353 active+clean; 457 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 1.9 KiB/s rd, 924 B/s wr, 3 op/s; 0 B/s, 0 objects/s recovering
Dec 11 09:18:45 compute-0 ceph-mon[74426]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "24"} v 0)
Dec 11 09:18:45 compute-0 ceph-mon[74426]: log_channel(audit) log [INF] : from='mgr.14577 192.168.122.100:0/2102670525' entity='mgr.compute-0.wwpcae' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "24"}]: dispatch
Dec 11 09:18:45 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-2-0-compute-0-iryjby[94782]: 11/12/2025 09:18:45 : epoch 693a8bfe : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fef700044a0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 11 09:18:45 compute-0 ceph-mon[74426]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Dec 11 09:18:45 compute-0 ceph-mon[74426]: log_channel(audit) log [INF] : from='mgr.14577 192.168.122.100:0/2102670525' entity='mgr.compute-0.wwpcae' 
Dec 11 09:18:45 compute-0 ceph-mon[74426]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Dec 11 09:18:45 compute-0 ceph-mon[74426]: log_channel(audit) log [INF] : from='mgr.14577 192.168.122.100:0/2102670525' entity='mgr.compute-0.wwpcae' 
Dec 11 09:18:45 compute-0 ceph-mgr[74715]: [cephadm INFO cephadm.serve] Reconfiguring osd.0 (monmap changed)...
Dec 11 09:18:45 compute-0 ceph-mgr[74715]: log_channel(cephadm) log [INF] : Reconfiguring osd.0 (monmap changed)...
Dec 11 09:18:45 compute-0 ceph-mon[74426]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "osd.0"} v 0)
Dec 11 09:18:45 compute-0 ceph-mon[74426]: log_channel(audit) log [INF] : from='mgr.14577 192.168.122.100:0/2102670525' entity='mgr.compute-0.wwpcae' cmd=[{"prefix": "auth get", "entity": "osd.0"}]: dispatch
Dec 11 09:18:45 compute-0 ceph-mon[74426]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 11 09:18:45 compute-0 ceph-mon[74426]: log_channel(audit) log [DBG] : from='mgr.14577 192.168.122.100:0/2102670525' entity='mgr.compute-0.wwpcae' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 11 09:18:45 compute-0 ceph-mgr[74715]: [cephadm INFO cephadm.serve] Reconfiguring daemon osd.0 on compute-1
Dec 11 09:18:45 compute-0 ceph-mgr[74715]: log_channel(cephadm) log [INF] : Reconfiguring daemon osd.0 on compute-1
Dec 11 09:18:45 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-2-0-compute-0-iryjby[94782]: 11/12/2025 09:18:45 : epoch 693a8bfe : compute-0 : ganesha.nfsd-2[reaper] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Dec 11 09:18:45 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-2-0-compute-0-iryjby[94782]: 11/12/2025 09:18:45 : epoch 693a8bfe : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fef60003490 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 11 09:18:45 compute-0 ceph-mon[74426]: from='mgr.14577 192.168.122.100:0/2102670525' entity='mgr.compute-0.wwpcae' 
Dec 11 09:18:45 compute-0 ceph-mon[74426]: from='mgr.14577 192.168.122.100:0/2102670525' entity='mgr.compute-0.wwpcae' 
Dec 11 09:18:45 compute-0 ceph-mon[74426]: Reconfiguring crash.compute-1 (monmap changed)...
Dec 11 09:18:45 compute-0 ceph-mon[74426]: from='mgr.14577 192.168.122.100:0/2102670525' entity='mgr.compute-0.wwpcae' cmd=[{"prefix": "auth get-or-create", "entity": "client.crash.compute-1", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]: dispatch
Dec 11 09:18:45 compute-0 ceph-mon[74426]: from='mgr.14577 192.168.122.100:0/2102670525' entity='mgr.compute-0.wwpcae' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 11 09:18:45 compute-0 ceph-mon[74426]: Reconfiguring daemon crash.compute-1 on compute-1
Dec 11 09:18:45 compute-0 ceph-mon[74426]: pgmap v26: 353 pgs: 353 active+clean; 457 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 1.9 KiB/s rd, 924 B/s wr, 3 op/s; 0 B/s, 0 objects/s recovering
Dec 11 09:18:45 compute-0 ceph-mon[74426]: from='mgr.14577 192.168.122.100:0/2102670525' entity='mgr.compute-0.wwpcae' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "24"}]: dispatch
Dec 11 09:18:45 compute-0 ceph-mon[74426]: from='mgr.14577 192.168.122.100:0/2102670525' entity='mgr.compute-0.wwpcae' 
Dec 11 09:18:45 compute-0 ceph-mon[74426]: from='mgr.14577 192.168.122.100:0/2102670525' entity='mgr.compute-0.wwpcae' 
Dec 11 09:18:45 compute-0 ceph-mon[74426]: Reconfiguring osd.0 (monmap changed)...
Dec 11 09:18:45 compute-0 ceph-mon[74426]: from='mgr.14577 192.168.122.100:0/2102670525' entity='mgr.compute-0.wwpcae' cmd=[{"prefix": "auth get", "entity": "osd.0"}]: dispatch
Dec 11 09:18:45 compute-0 ceph-mon[74426]: from='mgr.14577 192.168.122.100:0/2102670525' entity='mgr.compute-0.wwpcae' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 11 09:18:45 compute-0 ceph-mon[74426]: Reconfiguring daemon osd.0 on compute-1
Dec 11 09:18:45 compute-0 ceph-mon[74426]: mon.compute-0@0(leader).osd e125 do_prune osdmap full prune enabled
Dec 11 09:18:45 compute-0 ceph-mon[74426]: log_channel(audit) log [INF] : from='mgr.14577 192.168.122.100:0/2102670525' entity='mgr.compute-0.wwpcae' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "24"}]': finished
Dec 11 09:18:45 compute-0 ceph-mon[74426]: mon.compute-0@0(leader).osd e126 e126: 3 total, 3 up, 3 in
Dec 11 09:18:45 compute-0 ceph-mon[74426]: log_channel(cluster) log [DBG] : osdmap e126: 3 total, 3 up, 3 in
Dec 11 09:18:45 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[102058]: logger=grafana-apiserver t=2025-12-11T09:18:45.496545589Z level=info msg="Adding GroupVersion playlist.grafana.app v0alpha1 to ResourceManager"
Dec 11 09:18:45 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[102058]: logger=grafana-apiserver t=2025-12-11T09:18:45.496973652Z level=info msg="Adding GroupVersion featuretoggle.grafana.app v0alpha1 to ResourceManager"
Dec 11 09:18:45 compute-0 ceph-mon[74426]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Dec 11 09:18:45 compute-0 ceph-mon[74426]: log_channel(audit) log [INF] : from='mgr.14577 192.168.122.100:0/2102670525' entity='mgr.compute-0.wwpcae' 
Dec 11 09:18:45 compute-0 ceph-mon[74426]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Dec 11 09:18:45 compute-0 ceph-mon[74426]: log_channel(audit) log [INF] : from='mgr.14577 192.168.122.100:0/2102670525' entity='mgr.compute-0.wwpcae' 
Dec 11 09:18:45 compute-0 ceph-mgr[74715]: [cephadm INFO cephadm.serve] Reconfiguring mon.compute-1 (monmap changed)...
Dec 11 09:18:45 compute-0 ceph-mgr[74715]: log_channel(cephadm) log [INF] : Reconfiguring mon.compute-1 (monmap changed)...
Dec 11 09:18:45 compute-0 ceph-mon[74426]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "mon."} v 0)
Dec 11 09:18:45 compute-0 ceph-mon[74426]: log_channel(audit) log [INF] : from='mgr.14577 192.168.122.100:0/2102670525' entity='mgr.compute-0.wwpcae' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch
Dec 11 09:18:45 compute-0 ceph-mon[74426]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config get", "who": "mon", "key": "public_network"} v 0)
Dec 11 09:18:45 compute-0 ceph-mon[74426]: log_channel(audit) log [DBG] : from='mgr.14577 192.168.122.100:0/2102670525' entity='mgr.compute-0.wwpcae' cmd=[{"prefix": "config get", "who": "mon", "key": "public_network"}]: dispatch
Dec 11 09:18:45 compute-0 ceph-mon[74426]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 11 09:18:45 compute-0 ceph-mon[74426]: log_channel(audit) log [DBG] : from='mgr.14577 192.168.122.100:0/2102670525' entity='mgr.compute-0.wwpcae' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 11 09:18:45 compute-0 ceph-mgr[74715]: [cephadm INFO cephadm.serve] Reconfiguring daemon mon.compute-1 on compute-1
Dec 11 09:18:45 compute-0 ceph-mgr[74715]: log_channel(cephadm) log [INF] : Reconfiguring daemon mon.compute-1 on compute-1
Dec 11 09:18:46 compute-0 ceph-mon[74426]: mon.compute-0@0(leader).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 11 09:18:46 compute-0 ceph-mon[74426]: from='mgr.14577 192.168.122.100:0/2102670525' entity='mgr.compute-0.wwpcae' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "24"}]': finished
Dec 11 09:18:46 compute-0 ceph-mon[74426]: osdmap e126: 3 total, 3 up, 3 in
Dec 11 09:18:46 compute-0 ceph-mon[74426]: from='mgr.14577 192.168.122.100:0/2102670525' entity='mgr.compute-0.wwpcae' 
Dec 11 09:18:46 compute-0 ceph-mon[74426]: from='mgr.14577 192.168.122.100:0/2102670525' entity='mgr.compute-0.wwpcae' 
Dec 11 09:18:46 compute-0 ceph-mon[74426]: Reconfiguring mon.compute-1 (monmap changed)...
Dec 11 09:18:46 compute-0 ceph-mon[74426]: from='mgr.14577 192.168.122.100:0/2102670525' entity='mgr.compute-0.wwpcae' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch
Dec 11 09:18:46 compute-0 ceph-mon[74426]: from='mgr.14577 192.168.122.100:0/2102670525' entity='mgr.compute-0.wwpcae' cmd=[{"prefix": "config get", "who": "mon", "key": "public_network"}]: dispatch
Dec 11 09:18:46 compute-0 ceph-mon[74426]: from='mgr.14577 192.168.122.100:0/2102670525' entity='mgr.compute-0.wwpcae' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 11 09:18:46 compute-0 ceph-mon[74426]: Reconfiguring daemon mon.compute-1 on compute-1
Dec 11 09:18:46 compute-0 ceph-mon[74426]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Dec 11 09:18:46 compute-0 ceph-mon[74426]: log_channel(audit) log [INF] : from='mgr.14577 192.168.122.100:0/2102670525' entity='mgr.compute-0.wwpcae' 
Dec 11 09:18:46 compute-0 ceph-mon[74426]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Dec 11 09:18:46 compute-0 ceph-mon[74426]: log_channel(audit) log [INF] : from='mgr.14577 192.168.122.100:0/2102670525' entity='mgr.compute-0.wwpcae' 
Dec 11 09:18:46 compute-0 ceph-mgr[74715]: [cephadm INFO cephadm.serve] Reconfiguring mon.compute-2 (monmap changed)...
Dec 11 09:18:46 compute-0 ceph-mgr[74715]: log_channel(cephadm) log [INF] : Reconfiguring mon.compute-2 (monmap changed)...
Dec 11 09:18:46 compute-0 ceph-mon[74426]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "mon."} v 0)
Dec 11 09:18:46 compute-0 ceph-mon[74426]: log_channel(audit) log [INF] : from='mgr.14577 192.168.122.100:0/2102670525' entity='mgr.compute-0.wwpcae' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch
Dec 11 09:18:46 compute-0 ceph-mon[74426]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config get", "who": "mon", "key": "public_network"} v 0)
Dec 11 09:18:46 compute-0 ceph-mon[74426]: log_channel(audit) log [DBG] : from='mgr.14577 192.168.122.100:0/2102670525' entity='mgr.compute-0.wwpcae' cmd=[{"prefix": "config get", "who": "mon", "key": "public_network"}]: dispatch
Dec 11 09:18:46 compute-0 ceph-mon[74426]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 11 09:18:46 compute-0 ceph-mon[74426]: log_channel(audit) log [DBG] : from='mgr.14577 192.168.122.100:0/2102670525' entity='mgr.compute-0.wwpcae' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 11 09:18:46 compute-0 ceph-mgr[74715]: [cephadm INFO cephadm.serve] Reconfiguring daemon mon.compute-2 on compute-2
Dec 11 09:18:46 compute-0 ceph-mgr[74715]: log_channel(cephadm) log [INF] : Reconfiguring daemon mon.compute-2 on compute-2
Dec 11 09:18:46 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-2-0-compute-0-iryjby[94782]: 11/12/2025 09:18:46 : epoch 693a8bfe : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fef50003c10 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 11 09:18:46 compute-0 radosgw[93354]: ====== starting new request req=0x7f36cd2845d0 =====
Dec 11 09:18:46 compute-0 radosgw[93354]: ====== req done req=0x7f36cd2845d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 11 09:18:46 compute-0 radosgw[93354]: beast: 0x7f36cd2845d0: 192.168.122.102 - anonymous [11/Dec/2025:09:18:46.985 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 11 09:18:47 compute-0 radosgw[93354]: ====== starting new request req=0x7f36cd2845d0 =====
Dec 11 09:18:47 compute-0 radosgw[93354]: ====== req done req=0x7f36cd2845d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 11 09:18:47 compute-0 radosgw[93354]: beast: 0x7f36cd2845d0: 192.168.122.100 - anonymous [11/Dec/2025:09:18:47.005 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 11 09:18:47 compute-0 ceph-mgr[74715]: log_channel(cluster) log [DBG] : pgmap v28: 353 pgs: 353 active+clean; 457 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 1.9 KiB/s rd, 895 B/s wr, 2 op/s; 0 B/s, 0 objects/s recovering
Dec 11 09:18:47 compute-0 ceph-mon[74426]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "25"} v 0)
Dec 11 09:18:47 compute-0 ceph-mon[74426]: log_channel(audit) log [INF] : from='mgr.14577 192.168.122.100:0/2102670525' entity='mgr.compute-0.wwpcae' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "25"}]: dispatch
Dec 11 09:18:47 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-2-0-compute-0-iryjby[94782]: 11/12/2025 09:18:47 : epoch 693a8bfe : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fef4c0036c0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 11 09:18:47 compute-0 sudo[102115]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-owzyeoyaeoabcefouhltsmhfnlcuekem ; /usr/bin/python3'
Dec 11 09:18:47 compute-0 sudo[102115]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 11 09:18:47 compute-0 ceph-mon[74426]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0)
Dec 11 09:18:47 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-2-0-compute-0-iryjby[94782]: 11/12/2025 09:18:47 : epoch 693a8bfe : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fef60003490 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 11 09:18:47 compute-0 ceph-mon[74426]: log_channel(audit) log [INF] : from='mgr.14577 192.168.122.100:0/2102670525' entity='mgr.compute-0.wwpcae' 
Dec 11 09:18:47 compute-0 ceph-mon[74426]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Dec 11 09:18:47 compute-0 ceph-mon[74426]: log_channel(audit) log [INF] : from='mgr.14577 192.168.122.100:0/2102670525' entity='mgr.compute-0.wwpcae' 
Dec 11 09:18:47 compute-0 ceph-mgr[74715]: [cephadm INFO cephadm.serve] Reconfiguring mgr.compute-2.uiimcn (monmap changed)...
Dec 11 09:18:47 compute-0 ceph-mgr[74715]: log_channel(cephadm) log [INF] : Reconfiguring mgr.compute-2.uiimcn (monmap changed)...
Dec 11 09:18:47 compute-0 ceph-mon[74426]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get-or-create", "entity": "mgr.compute-2.uiimcn", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} v 0)
Dec 11 09:18:47 compute-0 ceph-mon[74426]: log_channel(audit) log [INF] : from='mgr.14577 192.168.122.100:0/2102670525' entity='mgr.compute-0.wwpcae' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.compute-2.uiimcn", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch
Dec 11 09:18:47 compute-0 ceph-mon[74426]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr services"} v 0)
Dec 11 09:18:47 compute-0 ceph-mon[74426]: log_channel(audit) log [DBG] : from='mgr.14577 192.168.122.100:0/2102670525' entity='mgr.compute-0.wwpcae' cmd=[{"prefix": "mgr services"}]: dispatch
Dec 11 09:18:47 compute-0 ceph-mon[74426]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 11 09:18:47 compute-0 ceph-mon[74426]: log_channel(audit) log [DBG] : from='mgr.14577 192.168.122.100:0/2102670525' entity='mgr.compute-0.wwpcae' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 11 09:18:47 compute-0 ceph-mgr[74715]: [cephadm INFO cephadm.serve] Reconfiguring daemon mgr.compute-2.uiimcn on compute-2
Dec 11 09:18:47 compute-0 ceph-mgr[74715]: log_channel(cephadm) log [INF] : Reconfiguring daemon mgr.compute-2.uiimcn on compute-2
Dec 11 09:18:47 compute-0 python3[102117]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint radosgw-admin quay.io/ceph/ceph:v19 --fsid 31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch ps --daemon-type rgw --format json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 11 09:18:47 compute-0 podman[102118]: 2025-12-11 09:18:47.574395401 +0000 UTC m=+0.055794681 container create 05535fb9d78c85a6ba1a07c9f6a68535df21c5a3120172381ee9af2be8d5a558 (image=quay.io/ceph/ceph:v19, name=kind_neumann, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325)
Dec 11 09:18:47 compute-0 systemd[1]: Started libpod-conmon-05535fb9d78c85a6ba1a07c9f6a68535df21c5a3120172381ee9af2be8d5a558.scope.
Dec 11 09:18:47 compute-0 ceph-mon[74426]: from='mgr.14577 192.168.122.100:0/2102670525' entity='mgr.compute-0.wwpcae' 
Dec 11 09:18:47 compute-0 ceph-mon[74426]: from='mgr.14577 192.168.122.100:0/2102670525' entity='mgr.compute-0.wwpcae' 
Dec 11 09:18:47 compute-0 ceph-mon[74426]: Reconfiguring mon.compute-2 (monmap changed)...
Dec 11 09:18:47 compute-0 ceph-mon[74426]: from='mgr.14577 192.168.122.100:0/2102670525' entity='mgr.compute-0.wwpcae' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch
Dec 11 09:18:47 compute-0 ceph-mon[74426]: from='mgr.14577 192.168.122.100:0/2102670525' entity='mgr.compute-0.wwpcae' cmd=[{"prefix": "config get", "who": "mon", "key": "public_network"}]: dispatch
Dec 11 09:18:47 compute-0 ceph-mon[74426]: from='mgr.14577 192.168.122.100:0/2102670525' entity='mgr.compute-0.wwpcae' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 11 09:18:47 compute-0 ceph-mon[74426]: Reconfiguring daemon mon.compute-2 on compute-2
Dec 11 09:18:47 compute-0 ceph-mon[74426]: pgmap v28: 353 pgs: 353 active+clean; 457 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 1.9 KiB/s rd, 895 B/s wr, 2 op/s; 0 B/s, 0 objects/s recovering
Dec 11 09:18:47 compute-0 ceph-mon[74426]: from='mgr.14577 192.168.122.100:0/2102670525' entity='mgr.compute-0.wwpcae' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "25"}]: dispatch
Dec 11 09:18:47 compute-0 ceph-mon[74426]: from='mgr.14577 192.168.122.100:0/2102670525' entity='mgr.compute-0.wwpcae' 
Dec 11 09:18:47 compute-0 ceph-mon[74426]: from='mgr.14577 192.168.122.100:0/2102670525' entity='mgr.compute-0.wwpcae' 
Dec 11 09:18:47 compute-0 ceph-mon[74426]: from='mgr.14577 192.168.122.100:0/2102670525' entity='mgr.compute-0.wwpcae' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.compute-2.uiimcn", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch
Dec 11 09:18:47 compute-0 ceph-mon[74426]: from='mgr.14577 192.168.122.100:0/2102670525' entity='mgr.compute-0.wwpcae' cmd=[{"prefix": "mgr services"}]: dispatch
Dec 11 09:18:47 compute-0 ceph-mon[74426]: from='mgr.14577 192.168.122.100:0/2102670525' entity='mgr.compute-0.wwpcae' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 11 09:18:47 compute-0 ceph-mon[74426]: mon.compute-0@0(leader).osd e126 do_prune osdmap full prune enabled
Dec 11 09:18:47 compute-0 ceph-mon[74426]: log_channel(audit) log [INF] : from='mgr.14577 192.168.122.100:0/2102670525' entity='mgr.compute-0.wwpcae' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "25"}]': finished
Dec 11 09:18:47 compute-0 ceph-mon[74426]: mon.compute-0@0(leader).osd e127 e127: 3 total, 3 up, 3 in
Dec 11 09:18:47 compute-0 ceph-mon[74426]: log_channel(cluster) log [DBG] : osdmap e127: 3 total, 3 up, 3 in
Dec 11 09:18:47 compute-0 podman[102118]: 2025-12-11 09:18:47.55523678 +0000 UTC m=+0.036636080 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Dec 11 09:18:47 compute-0 systemd[1]: Started libcrun container.
Dec 11 09:18:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d0ea63cf99b1eb21a7149559a4a295f63b968245e616fd47ff9c545766d94b82/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec 11 09:18:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d0ea63cf99b1eb21a7149559a4a295f63b968245e616fd47ff9c545766d94b82/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 11 09:18:47 compute-0 podman[102118]: 2025-12-11 09:18:47.67068577 +0000 UTC m=+0.152085070 container init 05535fb9d78c85a6ba1a07c9f6a68535df21c5a3120172381ee9af2be8d5a558 (image=quay.io/ceph/ceph:v19, name=kind_neumann, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 11 09:18:47 compute-0 podman[102118]: 2025-12-11 09:18:47.677635215 +0000 UTC m=+0.159034495 container start 05535fb9d78c85a6ba1a07c9f6a68535df21c5a3120172381ee9af2be8d5a558 (image=quay.io/ceph/ceph:v19, name=kind_neumann, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, CEPH_REF=squid)
Dec 11 09:18:47 compute-0 podman[102118]: 2025-12-11 09:18:47.68109015 +0000 UTC m=+0.162489460 container attach 05535fb9d78c85a6ba1a07c9f6a68535df21c5a3120172381ee9af2be8d5a558 (image=quay.io/ceph/ceph:v19, name=kind_neumann, org.label-schema.build-date=20250325, OSD_FLAVOR=default, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 11 09:18:47 compute-0 kind_neumann[102134]: ERROR: invalid flag --daemon-type
Dec 11 09:18:47 compute-0 systemd[1]: libpod-05535fb9d78c85a6ba1a07c9f6a68535df21c5a3120172381ee9af2be8d5a558.scope: Deactivated successfully.
Dec 11 09:18:47 compute-0 podman[102118]: 2025-12-11 09:18:47.731199815 +0000 UTC m=+0.212599095 container died 05535fb9d78c85a6ba1a07c9f6a68535df21c5a3120172381ee9af2be8d5a558 (image=quay.io/ceph/ceph:v19, name=kind_neumann, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.40.1, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 11 09:18:47 compute-0 systemd[1]: var-lib-containers-storage-overlay-d0ea63cf99b1eb21a7149559a4a295f63b968245e616fd47ff9c545766d94b82-merged.mount: Deactivated successfully.
Dec 11 09:18:47 compute-0 podman[102118]: 2025-12-11 09:18:47.779395261 +0000 UTC m=+0.260794541 container remove 05535fb9d78c85a6ba1a07c9f6a68535df21c5a3120172381ee9af2be8d5a558 (image=quay.io/ceph/ceph:v19, name=kind_neumann, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.40.1, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 11 09:18:47 compute-0 systemd[1]: libpod-conmon-05535fb9d78c85a6ba1a07c9f6a68535df21c5a3120172381ee9af2be8d5a558.scope: Deactivated successfully.
Dec 11 09:18:47 compute-0 sudo[102115]: pam_unix(sudo:session): session closed for user root
Dec 11 09:18:48 compute-0 ceph-mon[74426]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0)
Dec 11 09:18:48 compute-0 ceph-mon[74426]: log_channel(audit) log [INF] : from='mgr.14577 192.168.122.100:0/2102670525' entity='mgr.compute-0.wwpcae' 
Dec 11 09:18:48 compute-0 ceph-mon[74426]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Dec 11 09:18:48 compute-0 ceph-mon[74426]: log_channel(audit) log [INF] : from='mgr.14577 192.168.122.100:0/2102670525' entity='mgr.compute-0.wwpcae' 
Dec 11 09:18:48 compute-0 ceph-mgr[74715]: [cephadm INFO cephadm.serve] Reconfiguring osd.2 (unknown last config time)...
Dec 11 09:18:48 compute-0 ceph-mgr[74715]: log_channel(cephadm) log [INF] : Reconfiguring osd.2 (unknown last config time)...
Dec 11 09:18:48 compute-0 ceph-mon[74426]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "osd.2"} v 0)
Dec 11 09:18:48 compute-0 ceph-mon[74426]: log_channel(audit) log [INF] : from='mgr.14577 192.168.122.100:0/2102670525' entity='mgr.compute-0.wwpcae' cmd=[{"prefix": "auth get", "entity": "osd.2"}]: dispatch
Dec 11 09:18:48 compute-0 ceph-mon[74426]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 11 09:18:48 compute-0 ceph-mon[74426]: log_channel(audit) log [DBG] : from='mgr.14577 192.168.122.100:0/2102670525' entity='mgr.compute-0.wwpcae' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 11 09:18:48 compute-0 ceph-mgr[74715]: [cephadm INFO cephadm.serve] Reconfiguring daemon osd.2 on compute-2
Dec 11 09:18:48 compute-0 ceph-mgr[74715]: log_channel(cephadm) log [INF] : Reconfiguring daemon osd.2 on compute-2
Dec 11 09:18:48 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-2-0-compute-0-iryjby[94782]: 11/12/2025 09:18:48 : epoch 693a8bfe : compute-0 : ganesha.nfsd-2[reaper] nfs_lift_grace_locked :STATE :EVENT :NFS Server Now NOT IN GRACE
Dec 11 09:18:48 compute-0 ceph-mon[74426]: Reconfiguring mgr.compute-2.uiimcn (monmap changed)...
Dec 11 09:18:48 compute-0 ceph-mon[74426]: Reconfiguring daemon mgr.compute-2.uiimcn on compute-2
Dec 11 09:18:48 compute-0 ceph-mon[74426]: from='mgr.14577 192.168.122.100:0/2102670525' entity='mgr.compute-0.wwpcae' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "25"}]': finished
Dec 11 09:18:48 compute-0 ceph-mon[74426]: osdmap e127: 3 total, 3 up, 3 in
Dec 11 09:18:48 compute-0 ceph-mon[74426]: from='mgr.14577 192.168.122.100:0/2102670525' entity='mgr.compute-0.wwpcae' 
Dec 11 09:18:48 compute-0 ceph-mon[74426]: from='mgr.14577 192.168.122.100:0/2102670525' entity='mgr.compute-0.wwpcae' 
Dec 11 09:18:48 compute-0 ceph-mon[74426]: Reconfiguring osd.2 (unknown last config time)...
Dec 11 09:18:48 compute-0 ceph-mon[74426]: from='mgr.14577 192.168.122.100:0/2102670525' entity='mgr.compute-0.wwpcae' cmd=[{"prefix": "auth get", "entity": "osd.2"}]: dispatch
Dec 11 09:18:48 compute-0 ceph-mon[74426]: from='mgr.14577 192.168.122.100:0/2102670525' entity='mgr.compute-0.wwpcae' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 11 09:18:48 compute-0 ceph-mon[74426]: Reconfiguring daemon osd.2 on compute-2
Dec 11 09:18:48 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-2-0-compute-0-iryjby[94782]: 11/12/2025 09:18:48 : epoch 693a8bfe : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fef700044c0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 11 09:18:48 compute-0 radosgw[93354]: ====== starting new request req=0x7f36cd2845d0 =====
Dec 11 09:18:48 compute-0 radosgw[93354]: ====== req done req=0x7f36cd2845d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 11 09:18:48 compute-0 radosgw[93354]: beast: 0x7f36cd2845d0: 192.168.122.102 - anonymous [11/Dec/2025:09:18:48.989 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 11 09:18:49 compute-0 radosgw[93354]: ====== starting new request req=0x7f36cd2845d0 =====
Dec 11 09:18:49 compute-0 radosgw[93354]: ====== req done req=0x7f36cd2845d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 11 09:18:49 compute-0 radosgw[93354]: beast: 0x7f36cd2845d0: 192.168.122.100 - anonymous [11/Dec/2025:09:18:49.006 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 11 09:18:49 compute-0 ceph-mgr[74715]: log_channel(cluster) log [DBG] : pgmap v30: 353 pgs: 353 active+clean; 457 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 1.4 KiB/s rd, 767 B/s wr, 2 op/s
Dec 11 09:18:49 compute-0 ceph-mon[74426]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "26"} v 0)
Dec 11 09:18:49 compute-0 ceph-mon[74426]: log_channel(audit) log [INF] : from='mgr.14577 192.168.122.100:0/2102670525' entity='mgr.compute-0.wwpcae' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "26"}]: dispatch
Dec 11 09:18:49 compute-0 ceph-mon[74426]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0)
Dec 11 09:18:49 compute-0 ceph-mon[74426]: log_channel(audit) log [INF] : from='mgr.14577 192.168.122.100:0/2102670525' entity='mgr.compute-0.wwpcae' 
Dec 11 09:18:49 compute-0 ceph-mon[74426]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Dec 11 09:18:49 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-2-0-compute-0-iryjby[94782]: 11/12/2025 09:18:49 : epoch 693a8bfe : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fef50003c10 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 11 09:18:49 compute-0 ceph-mon[74426]: log_channel(audit) log [INF] : from='mgr.14577 192.168.122.100:0/2102670525' entity='mgr.compute-0.wwpcae' 
Dec 11 09:18:49 compute-0 ceph-mon[74426]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "dashboard get-alertmanager-api-host"} v 0)
Dec 11 09:18:49 compute-0 ceph-mon[74426]: log_channel(audit) log [DBG] : from='mgr.14577 192.168.122.100:0/2102670525' entity='mgr.compute-0.wwpcae' cmd=[{"prefix": "dashboard get-alertmanager-api-host"}]: dispatch
Dec 11 09:18:49 compute-0 ceph-mgr[74715]: log_channel(audit) log [DBG] : from='mon.0 -' entity='mon.' cmd=[{"prefix": "dashboard get-alertmanager-api-host"}]: dispatch
Dec 11 09:18:49 compute-0 ceph-mon[74426]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "dashboard get-grafana-api-url"} v 0)
Dec 11 09:18:49 compute-0 ceph-mon[74426]: log_channel(audit) log [DBG] : from='mgr.14577 192.168.122.100:0/2102670525' entity='mgr.compute-0.wwpcae' cmd=[{"prefix": "dashboard get-grafana-api-url"}]: dispatch
Dec 11 09:18:49 compute-0 ceph-mgr[74715]: log_channel(audit) log [DBG] : from='mon.0 -' entity='mon.' cmd=[{"prefix": "dashboard get-grafana-api-url"}]: dispatch
Dec 11 09:18:49 compute-0 ceph-mon[74426]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "dashboard set-grafana-api-url", "value": "https://192.168.122.100:3000"} v 0)
Dec 11 09:18:49 compute-0 ceph-mon[74426]: log_channel(audit) log [INF] : from='mgr.14577 192.168.122.100:0/2102670525' entity='mgr.compute-0.wwpcae' cmd=[{"prefix": "dashboard set-grafana-api-url", "value": "https://192.168.122.100:3000"}]: dispatch
Dec 11 09:18:49 compute-0 ceph-mgr[74715]: log_channel(audit) log [DBG] : from='mon.0 -' entity='mon.' cmd=[{"prefix": "dashboard set-grafana-api-url", "value": "https://192.168.122.100:3000"}]: dispatch
Dec 11 09:18:49 compute-0 ceph-mon[74426]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config set, name=mgr/dashboard/GRAFANA_API_URL}] v 0)
Dec 11 09:18:49 compute-0 ceph-mon[74426]: log_channel(audit) log [INF] : from='mgr.14577 192.168.122.100:0/2102670525' entity='mgr.compute-0.wwpcae' 
Dec 11 09:18:49 compute-0 ceph-mgr[74715]: [prometheus INFO root] Restarting engine...
Dec 11 09:18:49 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-mgr-compute-0-wwpcae[74711]: [11/Dec/2025:09:18:49] ENGINE Bus STOPPING
Dec 11 09:18:49 compute-0 ceph-mgr[74715]: [prometheus INFO cherrypy.error] [11/Dec/2025:09:18:49] ENGINE Bus STOPPING
Dec 11 09:18:49 compute-0 sudo[102168]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 11 09:18:49 compute-0 sudo[102168]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 11 09:18:49 compute-0 sudo[102168]: pam_unix(sudo:session): session closed for user root
Dec 11 09:18:49 compute-0 sudo[102193]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ls
Dec 11 09:18:49 compute-0 sudo[102193]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 11 09:18:49 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-2-0-compute-0-iryjby[94782]: 11/12/2025 09:18:49 : epoch 693a8bfe : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fef4c0036c0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 11 09:18:49 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-mgr-compute-0-wwpcae[74711]: [11/Dec/2025:09:18:49] ENGINE HTTP Server cherrypy._cpwsgi_server.CPWSGIServer(('::', 9283)) shut down
Dec 11 09:18:49 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-mgr-compute-0-wwpcae[74711]: [11/Dec/2025:09:18:49] ENGINE Bus STOPPED
Dec 11 09:18:49 compute-0 ceph-mgr[74715]: [prometheus INFO cherrypy.error] [11/Dec/2025:09:18:49] ENGINE HTTP Server cherrypy._cpwsgi_server.CPWSGIServer(('::', 9283)) shut down
Dec 11 09:18:49 compute-0 ceph-mgr[74715]: [prometheus INFO cherrypy.error] [11/Dec/2025:09:18:49] ENGINE Bus STOPPED
Dec 11 09:18:49 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-mgr-compute-0-wwpcae[74711]: [11/Dec/2025:09:18:49] ENGINE Bus STARTING
Dec 11 09:18:49 compute-0 ceph-mgr[74715]: [prometheus INFO cherrypy.error] [11/Dec/2025:09:18:49] ENGINE Bus STARTING
Dec 11 09:18:49 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-mgr-compute-0-wwpcae[74711]: [11/Dec/2025:09:18:49] ENGINE Serving on http://:::9283
Dec 11 09:18:49 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-mgr-compute-0-wwpcae[74711]: [11/Dec/2025:09:18:49] ENGINE Bus STARTED
Dec 11 09:18:49 compute-0 ceph-mgr[74715]: [prometheus INFO cherrypy.error] [11/Dec/2025:09:18:49] ENGINE Serving on http://:::9283
Dec 11 09:18:49 compute-0 ceph-mgr[74715]: [prometheus INFO cherrypy.error] [11/Dec/2025:09:18:49] ENGINE Bus STARTED
Dec 11 09:18:49 compute-0 ceph-mgr[74715]: [prometheus INFO root] Engine started.
Dec 11 09:18:49 compute-0 ceph-mon[74426]: mon.compute-0@0(leader).osd e127 do_prune osdmap full prune enabled
Dec 11 09:18:49 compute-0 ceph-mon[74426]: pgmap v30: 353 pgs: 353 active+clean; 457 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 1.4 KiB/s rd, 767 B/s wr, 2 op/s
Dec 11 09:18:49 compute-0 ceph-mon[74426]: from='mgr.14577 192.168.122.100:0/2102670525' entity='mgr.compute-0.wwpcae' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "26"}]: dispatch
Dec 11 09:18:49 compute-0 ceph-mon[74426]: from='mgr.14577 192.168.122.100:0/2102670525' entity='mgr.compute-0.wwpcae' 
Dec 11 09:18:49 compute-0 ceph-mon[74426]: from='mgr.14577 192.168.122.100:0/2102670525' entity='mgr.compute-0.wwpcae' 
Dec 11 09:18:49 compute-0 ceph-mon[74426]: from='mgr.14577 192.168.122.100:0/2102670525' entity='mgr.compute-0.wwpcae' cmd=[{"prefix": "dashboard get-alertmanager-api-host"}]: dispatch
Dec 11 09:18:49 compute-0 ceph-mon[74426]: from='mon.0 -' entity='mon.' cmd=[{"prefix": "dashboard get-alertmanager-api-host"}]: dispatch
Dec 11 09:18:49 compute-0 ceph-mon[74426]: from='mgr.14577 192.168.122.100:0/2102670525' entity='mgr.compute-0.wwpcae' cmd=[{"prefix": "dashboard get-grafana-api-url"}]: dispatch
Dec 11 09:18:49 compute-0 ceph-mon[74426]: from='mon.0 -' entity='mon.' cmd=[{"prefix": "dashboard get-grafana-api-url"}]: dispatch
Dec 11 09:18:49 compute-0 ceph-mon[74426]: from='mgr.14577 192.168.122.100:0/2102670525' entity='mgr.compute-0.wwpcae' cmd=[{"prefix": "dashboard set-grafana-api-url", "value": "https://192.168.122.100:3000"}]: dispatch
Dec 11 09:18:49 compute-0 ceph-mon[74426]: from='mon.0 -' entity='mon.' cmd=[{"prefix": "dashboard set-grafana-api-url", "value": "https://192.168.122.100:3000"}]: dispatch
Dec 11 09:18:49 compute-0 ceph-mon[74426]: from='mgr.14577 192.168.122.100:0/2102670525' entity='mgr.compute-0.wwpcae' 
Dec 11 09:18:49 compute-0 ceph-mon[74426]: log_channel(audit) log [INF] : from='mgr.14577 192.168.122.100:0/2102670525' entity='mgr.compute-0.wwpcae' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "26"}]': finished
Dec 11 09:18:49 compute-0 ceph-mon[74426]: mon.compute-0@0(leader).osd e128 e128: 3 total, 3 up, 3 in
Dec 11 09:18:49 compute-0 ceph-mon[74426]: log_channel(cluster) log [DBG] : osdmap e128: 3 total, 3 up, 3 in
Dec 11 09:18:49 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 128 pg[10.19( empty local-lis/les=0/0 n=0 ec=61/50 lis/c=91/91 les/c/f=92/92/0 sis=128) [1] r=0 lpr=128 pi=[91,128)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 11 09:18:49 compute-0 podman[102303]: 2025-12-11 09:18:49.955130951 +0000 UTC m=+0.067260925 container exec 9c08b5b1828392d2cc014af7ec9c415c9c915d4b6dd9798da9403e6929851bd4 (image=quay.io/ceph/ceph:v19, name=ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-mon-compute-0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 11 09:18:50 compute-0 podman[102303]: 2025-12-11 09:18:50.065946457 +0000 UTC m=+0.178076401 container exec_died 9c08b5b1828392d2cc014af7ec9c415c9c915d4b6dd9798da9403e6929851bd4 (image=quay.io/ceph/ceph:v19, name=ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-mon-compute-0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, ceph=True, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 11 09:18:50 compute-0 podman[102421]: 2025-12-11 09:18:50.603008526 +0000 UTC m=+0.061806836 container exec 71c011c91ff6298c4d680d36893ef7d2b662adac0c087e7fdac0eacd23df3a9f (image=quay.io/prometheus/node-exporter:v1.7.0, name=ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-node-exporter-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec 11 09:18:50 compute-0 podman[102421]: 2025-12-11 09:18:50.61190263 +0000 UTC m=+0.070700910 container exec_died 71c011c91ff6298c4d680d36893ef7d2b662adac0c087e7fdac0eacd23df3a9f (image=quay.io/prometheus/node-exporter:v1.7.0, name=ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-node-exporter-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec 11 09:18:50 compute-0 ceph-mon[74426]: mon.compute-0@0(leader).osd e128 do_prune osdmap full prune enabled
Dec 11 09:18:50 compute-0 ceph-mon[74426]: from='mgr.14577 192.168.122.100:0/2102670525' entity='mgr.compute-0.wwpcae' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "26"}]': finished
Dec 11 09:18:50 compute-0 ceph-mon[74426]: osdmap e128: 3 total, 3 up, 3 in
Dec 11 09:18:50 compute-0 ceph-mon[74426]: mon.compute-0@0(leader).osd e129 e129: 3 total, 3 up, 3 in
Dec 11 09:18:50 compute-0 ceph-mon[74426]: log_channel(cluster) log [DBG] : osdmap e129: 3 total, 3 up, 3 in
Dec 11 09:18:50 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 129 pg[10.19( empty local-lis/les=0/0 n=0 ec=61/50 lis/c=91/91 les/c/f=92/92/0 sis=129) [1]/[0] r=-1 lpr=129 pi=[91,129)/1 crt=0'0 mlcod 0'0 remapped mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 11 09:18:50 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 129 pg[10.19( empty local-lis/les=0/0 n=0 ec=61/50 lis/c=91/91 les/c/f=92/92/0 sis=129) [1]/[0] r=-1 lpr=129 pi=[91,129)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec 11 09:18:50 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-2-0-compute-0-iryjby[94782]: 11/12/2025 09:18:50 : epoch 693a8bfe : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fef60003490 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 11 09:18:50 compute-0 podman[102516]: 2025-12-11 09:18:50.981292599 +0000 UTC m=+0.070665630 container exec b054262215ee01c3adca2deaedf76efed99f7a0a45370b1e29ea68ac174c96fc (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-2-0-compute-0-iryjby, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Dec 11 09:18:50 compute-0 radosgw[93354]: ====== starting new request req=0x7f36cd2845d0 =====
Dec 11 09:18:50 compute-0 radosgw[93354]: ====== req done req=0x7f36cd2845d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 11 09:18:50 compute-0 radosgw[93354]: beast: 0x7f36cd2845d0: 192.168.122.102 - anonymous [11/Dec/2025:09:18:50.992 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 11 09:18:50 compute-0 podman[102516]: 2025-12-11 09:18:50.994819616 +0000 UTC m=+0.084192677 container exec_died b054262215ee01c3adca2deaedf76efed99f7a0a45370b1e29ea68ac174c96fc (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-2-0-compute-0-iryjby, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 11 09:18:51 compute-0 radosgw[93354]: ====== starting new request req=0x7f36cd2845d0 =====
Dec 11 09:18:51 compute-0 radosgw[93354]: ====== req done req=0x7f36cd2845d0 op status=0 http_status=200 latency=0.001000031s ======
Dec 11 09:18:51 compute-0 radosgw[93354]: beast: 0x7f36cd2845d0: 192.168.122.100 - anonymous [11/Dec/2025:09:18:51.008 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Dec 11 09:18:51 compute-0 ceph-mgr[74715]: log_channel(cluster) log [DBG] : pgmap v33: 353 pgs: 353 active+clean; 457 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 3.7 KiB/s rd, 1.1 KiB/s wr, 4 op/s
Dec 11 09:18:51 compute-0 ceph-mon[74426]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "27"} v 0)
Dec 11 09:18:51 compute-0 ceph-mon[74426]: log_channel(audit) log [INF] : from='mgr.14577 192.168.122.100:0/2102670525' entity='mgr.compute-0.wwpcae' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "27"}]: dispatch
Dec 11 09:18:51 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-2-0-compute-0-iryjby[94782]: 11/12/2025 09:18:51 : epoch 693a8bfe : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fef700044e0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 11 09:18:51 compute-0 ceph-mon[74426]: mon.compute-0@0(leader).osd e129 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 11 09:18:51 compute-0 podman[102582]: 2025-12-11 09:18:51.24986435 +0000 UTC m=+0.068730931 container exec 67bfd387bd504b410155709fb8f34d841ddb874b3571550320b1a27fbc3dea08 (image=quay.io/ceph/haproxy:2.3, name=ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-haproxy-nfs-cephfs-compute-0-qtoxfz)
Dec 11 09:18:51 compute-0 podman[102582]: 2025-12-11 09:18:51.288239023 +0000 UTC m=+0.107105604 container exec_died 67bfd387bd504b410155709fb8f34d841ddb874b3571550320b1a27fbc3dea08 (image=quay.io/ceph/haproxy:2.3, name=ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-haproxy-nfs-cephfs-compute-0-qtoxfz)
Dec 11 09:18:51 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-2-0-compute-0-iryjby[94782]: 11/12/2025 09:18:51 : epoch 693a8bfe : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fef540014d0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 11 09:18:51 compute-0 podman[102649]: 2025-12-11 09:18:51.521757612 +0000 UTC m=+0.059807675 container exec 2552e27573042b86043f6eb7d85be1b045b99c64a99b48fb64027534fb48d8b4 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-keepalived-nfs-cephfs-compute-0-ewssxv, name=keepalived, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, vcs-type=git, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., com.redhat.component=keepalived-container, distribution-scope=public, io.openshift.expose-services=, io.openshift.tags=Ceph keepalived, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, release=1793, description=keepalived for Ceph, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., version=2.2.4, summary=Provides keepalived on RHEL 9 for Ceph., io.buildah.version=1.28.2, io.k8s.display-name=Keepalived on RHEL 9, build-date=2023-02-22T09:23:20)
Dec 11 09:18:51 compute-0 podman[102649]: 2025-12-11 09:18:51.539622283 +0000 UTC m=+0.077672326 container exec_died 2552e27573042b86043f6eb7d85be1b045b99c64a99b48fb64027534fb48d8b4 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-keepalived-nfs-cephfs-compute-0-ewssxv, version=2.2.4, release=1793, description=keepalived for Ceph, distribution-scope=public, io.openshift.tags=Ceph keepalived, io.buildah.version=1.28.2, architecture=x86_64, build-date=2023-02-22T09:23:20, name=keepalived, io.openshift.expose-services=, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=keepalived-container, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., summary=Provides keepalived on RHEL 9 for Ceph., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, vendor=Red Hat, Inc., maintainer=Guillaume Abrioux <gabrioux@redhat.com>, io.k8s.display-name=Keepalived on RHEL 9)
Dec 11 09:18:51 compute-0 ceph-mon[74426]: mon.compute-0@0(leader).osd e129 do_prune osdmap full prune enabled
Dec 11 09:18:51 compute-0 ceph-mon[74426]: osdmap e129: 3 total, 3 up, 3 in
Dec 11 09:18:51 compute-0 ceph-mon[74426]: pgmap v33: 353 pgs: 353 active+clean; 457 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 3.7 KiB/s rd, 1.1 KiB/s wr, 4 op/s
Dec 11 09:18:51 compute-0 ceph-mon[74426]: from='mgr.14577 192.168.122.100:0/2102670525' entity='mgr.compute-0.wwpcae' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "27"}]: dispatch
Dec 11 09:18:51 compute-0 ceph-mon[74426]: log_channel(audit) log [INF] : from='mgr.14577 192.168.122.100:0/2102670525' entity='mgr.compute-0.wwpcae' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "27"}]': finished
Dec 11 09:18:51 compute-0 ceph-mon[74426]: mon.compute-0@0(leader).osd e130 e130: 3 total, 3 up, 3 in
Dec 11 09:18:51 compute-0 ceph-mon[74426]: log_channel(cluster) log [DBG] : osdmap e130: 3 total, 3 up, 3 in
Dec 11 09:18:51 compute-0 podman[102715]: 2025-12-11 09:18:51.763446883 +0000 UTC m=+0.052760987 container exec 3b9e0073af5e4c7c2b471c925a825b813ba9ef3bc5645cf6b5c090ee432c8a5d (image=quay.io/prometheus/alertmanager:v0.25.0, name=ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-alertmanager-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec 11 09:18:51 compute-0 podman[102715]: 2025-12-11 09:18:51.796969817 +0000 UTC m=+0.086283901 container exec_died 3b9e0073af5e4c7c2b471c925a825b813ba9ef3bc5645cf6b5c090ee432c8a5d (image=quay.io/prometheus/alertmanager:v0.25.0, name=ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-alertmanager-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec 11 09:18:52 compute-0 podman[102789]: 2025-12-11 09:18:52.054385843 +0000 UTC m=+0.075352104 container exec ac5286c4a64ed2433117d537e938dca0aaa6db8529e224a2e9020f17a5de7e45 (image=quay.io/ceph/grafana:10.4.0, name=ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0, maintainer=Grafana Labs <hello@grafana.com>)
Dec 11 09:18:52 compute-0 podman[102789]: 2025-12-11 09:18:52.254834664 +0000 UTC m=+0.275800895 container exec_died ac5286c4a64ed2433117d537e938dca0aaa6db8529e224a2e9020f17a5de7e45 (image=quay.io/ceph/grafana:10.4.0, name=ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0, maintainer=Grafana Labs <hello@grafana.com>)
Dec 11 09:18:52 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-haproxy-nfs-cephfs-compute-0-qtoxfz[95274]: [WARNING] 344/091852 (4) : Server backend/nfs.cephfs.0 is UP, reason: Layer4 check passed, check duration: 0ms. 2 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Dec 11 09:18:52 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-alertmanager-compute-0[101760]: ts=2025-12-11T09:18:52.404Z caller=cluster.go:698 level=info component=cluster msg="gossip settled; proceeding" elapsed=10.003829759s
Dec 11 09:18:52 compute-0 podman[102897]: 2025-12-11 09:18:52.652648569 +0000 UTC m=+0.055914135 container exec 90c28ad7bc328840534f9285d34005394d82c163694e46ece6c6c1f2d03e7fe2 (image=quay.io/prometheus/prometheus:v2.51.0, name=ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-prometheus-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec 11 09:18:52 compute-0 podman[102897]: 2025-12-11 09:18:52.706718516 +0000 UTC m=+0.109984082 container exec_died 90c28ad7bc328840534f9285d34005394d82c163694e46ece6c6c1f2d03e7fe2 (image=quay.io/prometheus/prometheus:v2.51.0, name=ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-prometheus-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec 11 09:18:52 compute-0 ceph-mon[74426]: mon.compute-0@0(leader).osd e130 do_prune osdmap full prune enabled
Dec 11 09:18:52 compute-0 ceph-mon[74426]: from='mgr.14577 192.168.122.100:0/2102670525' entity='mgr.compute-0.wwpcae' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "27"}]': finished
Dec 11 09:18:52 compute-0 ceph-mon[74426]: osdmap e130: 3 total, 3 up, 3 in
Dec 11 09:18:52 compute-0 ceph-mon[74426]: mon.compute-0@0(leader).osd e131 e131: 3 total, 3 up, 3 in
Dec 11 09:18:52 compute-0 ceph-mon[74426]: log_channel(cluster) log [DBG] : osdmap e131: 3 total, 3 up, 3 in
Dec 11 09:18:52 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 131 pg[10.19( v 56'1088 (0'0,56'1088] local-lis/les=0/0 n=7 ec=61/50 lis/c=129/91 les/c/f=130/92/0 sis=131) [1] r=0 lpr=131 pi=[91,131)/1 luod=0'0 crt=56'1088 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 1 -> 1, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Dec 11 09:18:52 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 131 pg[10.19( v 56'1088 (0'0,56'1088] local-lis/les=0/0 n=7 ec=61/50 lis/c=129/91 les/c/f=130/92/0 sis=131) [1] r=0 lpr=131 pi=[91,131)/1 crt=56'1088 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 11 09:18:52 compute-0 sudo[102193]: pam_unix(sudo:session): session closed for user root
Dec 11 09:18:52 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-2-0-compute-0-iryjby[94782]: 11/12/2025 09:18:52 : epoch 693a8bfe : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fef4c0036c0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 11 09:18:52 compute-0 ceph-mon[74426]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec 11 09:18:52 compute-0 ceph-mon[74426]: log_channel(audit) log [INF] : from='mgr.14577 192.168.122.100:0/2102670525' entity='mgr.compute-0.wwpcae' 
Dec 11 09:18:52 compute-0 ceph-mon[74426]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec 11 09:18:52 compute-0 ceph-mon[74426]: log_channel(audit) log [INF] : from='mgr.14577 192.168.122.100:0/2102670525' entity='mgr.compute-0.wwpcae' 
Dec 11 09:18:52 compute-0 ceph-mon[74426]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 11 09:18:52 compute-0 ceph-mon[74426]: log_channel(audit) log [DBG] : from='mgr.14577 192.168.122.100:0/2102670525' entity='mgr.compute-0.wwpcae' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 11 09:18:52 compute-0 ceph-mon[74426]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Dec 11 09:18:52 compute-0 ceph-mon[74426]: log_channel(audit) log [INF] : from='mgr.14577 192.168.122.100:0/2102670525' entity='mgr.compute-0.wwpcae' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec 11 09:18:52 compute-0 ceph-mon[74426]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Dec 11 09:18:52 compute-0 ceph-mon[74426]: log_channel(audit) log [INF] : from='mgr.14577 192.168.122.100:0/2102670525' entity='mgr.compute-0.wwpcae' 
Dec 11 09:18:52 compute-0 ceph-mon[74426]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Dec 11 09:18:52 compute-0 ceph-mon[74426]: log_channel(audit) log [INF] : from='mgr.14577 192.168.122.100:0/2102670525' entity='mgr.compute-0.wwpcae' 
Dec 11 09:18:52 compute-0 ceph-mon[74426]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Dec 11 09:18:52 compute-0 ceph-mon[74426]: log_channel(audit) log [DBG] : from='mgr.14577 192.168.122.100:0/2102670525' entity='mgr.compute-0.wwpcae' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec 11 09:18:52 compute-0 ceph-mon[74426]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Dec 11 09:18:52 compute-0 ceph-mon[74426]: log_channel(audit) log [INF] : from='mgr.14577 192.168.122.100:0/2102670525' entity='mgr.compute-0.wwpcae' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec 11 09:18:52 compute-0 ceph-mon[74426]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 11 09:18:52 compute-0 ceph-mon[74426]: log_channel(audit) log [DBG] : from='mgr.14577 192.168.122.100:0/2102670525' entity='mgr.compute-0.wwpcae' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 11 09:18:52 compute-0 sudo[102941]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 11 09:18:52 compute-0 sudo[102941]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 11 09:18:52 compute-0 sudo[102941]: pam_unix(sudo:session): session closed for user root
Dec 11 09:18:52 compute-0 sudo[102966]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Dec 11 09:18:52 compute-0 sudo[102966]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 11 09:18:52 compute-0 radosgw[93354]: ====== starting new request req=0x7f36cd2845d0 =====
Dec 11 09:18:52 compute-0 radosgw[93354]: ====== req done req=0x7f36cd2845d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 11 09:18:52 compute-0 radosgw[93354]: beast: 0x7f36cd2845d0: 192.168.122.102 - anonymous [11/Dec/2025:09:18:52.995 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 11 09:18:53 compute-0 radosgw[93354]: ====== starting new request req=0x7f36cd2845d0 =====
Dec 11 09:18:53 compute-0 radosgw[93354]: ====== req done req=0x7f36cd2845d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 11 09:18:53 compute-0 radosgw[93354]: beast: 0x7f36cd2845d0: 192.168.122.100 - anonymous [11/Dec/2025:09:18:53.010 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 11 09:18:53 compute-0 ceph-mgr[74715]: log_channel(cluster) log [DBG] : pgmap v36: 353 pgs: 353 active+clean; 457 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 5.2 KiB/s rd, 1.5 KiB/s wr, 6 op/s
Dec 11 09:18:53 compute-0 ceph-mon[74426]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "28"} v 0)
Dec 11 09:18:53 compute-0 ceph-mon[74426]: log_channel(audit) log [INF] : from='mgr.14577 192.168.122.100:0/2102670525' entity='mgr.compute-0.wwpcae' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "28"}]: dispatch
Dec 11 09:18:53 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-2-0-compute-0-iryjby[94782]: 11/12/2025 09:18:53 : epoch 693a8bfe : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fef60003490 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 11 09:18:53 compute-0 ceph-mon[74426]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Dec 11 09:18:53 compute-0 ceph-mon[74426]: log_channel(audit) log [DBG] : from='mgr.14577 192.168.122.100:0/2102670525' entity='mgr.compute-0.wwpcae' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec 11 09:18:53 compute-0 ceph-mgr[74715]: [volumes INFO mgr_util] scanning for idle connections..
Dec 11 09:18:53 compute-0 ceph-mgr[74715]: [volumes INFO mgr_util] cleaning up connections: []
Dec 11 09:18:53 compute-0 ceph-mgr[74715]: [volumes INFO mgr_util] scanning for idle connections..
Dec 11 09:18:53 compute-0 ceph-mgr[74715]: [volumes INFO mgr_util] cleaning up connections: []
Dec 11 09:18:53 compute-0 ceph-mgr[74715]: [volumes INFO mgr_util] scanning for idle connections..
Dec 11 09:18:53 compute-0 ceph-mgr[74715]: [volumes INFO mgr_util] cleaning up connections: []
Dec 11 09:18:53 compute-0 podman[103031]: 2025-12-11 09:18:53.389420884 +0000 UTC m=+0.045242155 container create 15845f92e800d88d8f0c2bdb182392bf7fd6bbc0eda1bbeee8743a2058130c76 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=elastic_jennings, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=squid)
Dec 11 09:18:53 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-2-0-compute-0-iryjby[94782]: 11/12/2025 09:18:53 : epoch 693a8bfe : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fef70004500 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 11 09:18:53 compute-0 systemd[1]: Started libpod-conmon-15845f92e800d88d8f0c2bdb182392bf7fd6bbc0eda1bbeee8743a2058130c76.scope.
Dec 11 09:18:53 compute-0 podman[103031]: 2025-12-11 09:18:53.368011754 +0000 UTC m=+0.023833055 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec 11 09:18:53 compute-0 systemd[1]: Started libcrun container.
Dec 11 09:18:53 compute-0 podman[103031]: 2025-12-11 09:18:53.487695174 +0000 UTC m=+0.143516475 container init 15845f92e800d88d8f0c2bdb182392bf7fd6bbc0eda1bbeee8743a2058130c76 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=elastic_jennings, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 11 09:18:53 compute-0 podman[103031]: 2025-12-11 09:18:53.495184095 +0000 UTC m=+0.151005366 container start 15845f92e800d88d8f0c2bdb182392bf7fd6bbc0eda1bbeee8743a2058130c76 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=elastic_jennings, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325)
Dec 11 09:18:53 compute-0 elastic_jennings[103047]: 167 167
Dec 11 09:18:53 compute-0 podman[103031]: 2025-12-11 09:18:53.499625272 +0000 UTC m=+0.155446543 container attach 15845f92e800d88d8f0c2bdb182392bf7fd6bbc0eda1bbeee8743a2058130c76 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=elastic_jennings, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 11 09:18:53 compute-0 systemd[1]: libpod-15845f92e800d88d8f0c2bdb182392bf7fd6bbc0eda1bbeee8743a2058130c76.scope: Deactivated successfully.
Dec 11 09:18:53 compute-0 podman[103031]: 2025-12-11 09:18:53.502022076 +0000 UTC m=+0.157843367 container died 15845f92e800d88d8f0c2bdb182392bf7fd6bbc0eda1bbeee8743a2058130c76 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=elastic_jennings, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, CEPH_REF=squid, ceph=True, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec 11 09:18:53 compute-0 systemd[1]: var-lib-containers-storage-overlay-7ae5989e2734d4079e0a79b9f48bdda4f02cd7c7147a0528978ab5e0513c894c-merged.mount: Deactivated successfully.
Dec 11 09:18:53 compute-0 podman[103031]: 2025-12-11 09:18:53.547120696 +0000 UTC m=+0.202941967 container remove 15845f92e800d88d8f0c2bdb182392bf7fd6bbc0eda1bbeee8743a2058130c76 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=elastic_jennings, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1)
Dec 11 09:18:53 compute-0 systemd[1]: libpod-conmon-15845f92e800d88d8f0c2bdb182392bf7fd6bbc0eda1bbeee8743a2058130c76.scope: Deactivated successfully.
Dec 11 09:18:53 compute-0 podman[103069]: 2025-12-11 09:18:53.718268913 +0000 UTC m=+0.042506242 container create 2ce1591bd4e06aef4db0e91177b16b60f1033bbbf17b79ada7e898ad285f9576 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=dazzling_mcnulty, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=squid, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec 11 09:18:53 compute-0 ceph-mon[74426]: mon.compute-0@0(leader).osd e131 do_prune osdmap full prune enabled
Dec 11 09:18:53 compute-0 ceph-mon[74426]: osdmap e131: 3 total, 3 up, 3 in
Dec 11 09:18:53 compute-0 ceph-mon[74426]: from='mgr.14577 192.168.122.100:0/2102670525' entity='mgr.compute-0.wwpcae' 
Dec 11 09:18:53 compute-0 ceph-mon[74426]: from='mgr.14577 192.168.122.100:0/2102670525' entity='mgr.compute-0.wwpcae' 
Dec 11 09:18:53 compute-0 ceph-mon[74426]: from='mgr.14577 192.168.122.100:0/2102670525' entity='mgr.compute-0.wwpcae' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 11 09:18:53 compute-0 ceph-mon[74426]: from='mgr.14577 192.168.122.100:0/2102670525' entity='mgr.compute-0.wwpcae' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec 11 09:18:53 compute-0 ceph-mon[74426]: from='mgr.14577 192.168.122.100:0/2102670525' entity='mgr.compute-0.wwpcae' 
Dec 11 09:18:53 compute-0 ceph-mon[74426]: from='mgr.14577 192.168.122.100:0/2102670525' entity='mgr.compute-0.wwpcae' 
Dec 11 09:18:53 compute-0 ceph-mon[74426]: from='mgr.14577 192.168.122.100:0/2102670525' entity='mgr.compute-0.wwpcae' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec 11 09:18:53 compute-0 ceph-mon[74426]: from='mgr.14577 192.168.122.100:0/2102670525' entity='mgr.compute-0.wwpcae' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec 11 09:18:53 compute-0 ceph-mon[74426]: from='mgr.14577 192.168.122.100:0/2102670525' entity='mgr.compute-0.wwpcae' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 11 09:18:53 compute-0 ceph-mon[74426]: pgmap v36: 353 pgs: 353 active+clean; 457 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 5.2 KiB/s rd, 1.5 KiB/s wr, 6 op/s
Dec 11 09:18:53 compute-0 ceph-mon[74426]: from='mgr.14577 192.168.122.100:0/2102670525' entity='mgr.compute-0.wwpcae' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "28"}]: dispatch
Dec 11 09:18:53 compute-0 ceph-mon[74426]: from='mgr.14577 192.168.122.100:0/2102670525' entity='mgr.compute-0.wwpcae' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec 11 09:18:53 compute-0 ceph-mon[74426]: log_channel(audit) log [INF] : from='mgr.14577 192.168.122.100:0/2102670525' entity='mgr.compute-0.wwpcae' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "28"}]': finished
Dec 11 09:18:53 compute-0 ceph-mon[74426]: mon.compute-0@0(leader).osd e132 e132: 3 total, 3 up, 3 in
Dec 11 09:18:53 compute-0 ceph-mon[74426]: log_channel(cluster) log [DBG] : osdmap e132: 3 total, 3 up, 3 in
Dec 11 09:18:53 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 132 pg[10.19( v 56'1088 (0'0,56'1088] local-lis/les=131/132 n=7 ec=61/50 lis/c=129/91 les/c/f=130/92/0 sis=131) [1] r=0 lpr=131 pi=[91,131)/1 crt=56'1088 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 11 09:18:53 compute-0 systemd[1]: Started libpod-conmon-2ce1591bd4e06aef4db0e91177b16b60f1033bbbf17b79ada7e898ad285f9576.scope.
Dec 11 09:18:53 compute-0 podman[103069]: 2025-12-11 09:18:53.700619949 +0000 UTC m=+0.024857308 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec 11 09:18:53 compute-0 systemd[1]: Started libcrun container.
Dec 11 09:18:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/35dc1d2daf71e82f67237235038e44bc3202d7aff7bead06b26a68da0623f079/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 11 09:18:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/35dc1d2daf71e82f67237235038e44bc3202d7aff7bead06b26a68da0623f079/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 11 09:18:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/35dc1d2daf71e82f67237235038e44bc3202d7aff7bead06b26a68da0623f079/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 11 09:18:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/35dc1d2daf71e82f67237235038e44bc3202d7aff7bead06b26a68da0623f079/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 11 09:18:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/35dc1d2daf71e82f67237235038e44bc3202d7aff7bead06b26a68da0623f079/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec 11 09:18:53 compute-0 podman[103069]: 2025-12-11 09:18:53.823799056 +0000 UTC m=+0.148036415 container init 2ce1591bd4e06aef4db0e91177b16b60f1033bbbf17b79ada7e898ad285f9576 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=dazzling_mcnulty, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325)
Dec 11 09:18:53 compute-0 podman[103069]: 2025-12-11 09:18:53.830105571 +0000 UTC m=+0.154342890 container start 2ce1591bd4e06aef4db0e91177b16b60f1033bbbf17b79ada7e898ad285f9576 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=dazzling_mcnulty, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, ceph=True)
Dec 11 09:18:53 compute-0 podman[103069]: 2025-12-11 09:18:53.833443934 +0000 UTC m=+0.157681273 container attach 2ce1591bd4e06aef4db0e91177b16b60f1033bbbf17b79ada7e898ad285f9576 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=dazzling_mcnulty, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Dec 11 09:18:54 compute-0 dazzling_mcnulty[103085]: --> passed data devices: 0 physical, 1 LVM
Dec 11 09:18:54 compute-0 dazzling_mcnulty[103085]: --> All data devices are unavailable
Dec 11 09:18:54 compute-0 systemd[1]: libpod-2ce1591bd4e06aef4db0e91177b16b60f1033bbbf17b79ada7e898ad285f9576.scope: Deactivated successfully.
Dec 11 09:18:54 compute-0 podman[103069]: 2025-12-11 09:18:54.191193524 +0000 UTC m=+0.515430863 container died 2ce1591bd4e06aef4db0e91177b16b60f1033bbbf17b79ada7e898ad285f9576 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=dazzling_mcnulty, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, OSD_FLAVOR=default)
Dec 11 09:18:54 compute-0 systemd[1]: var-lib-containers-storage-overlay-35dc1d2daf71e82f67237235038e44bc3202d7aff7bead06b26a68da0623f079-merged.mount: Deactivated successfully.
Dec 11 09:18:54 compute-0 podman[103069]: 2025-12-11 09:18:54.236970725 +0000 UTC m=+0.561208054 container remove 2ce1591bd4e06aef4db0e91177b16b60f1033bbbf17b79ada7e898ad285f9576 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=dazzling_mcnulty, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.build-date=20250325)
Dec 11 09:18:54 compute-0 systemd[1]: libpod-conmon-2ce1591bd4e06aef4db0e91177b16b60f1033bbbf17b79ada7e898ad285f9576.scope: Deactivated successfully.
Dec 11 09:18:54 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 132 pg[10.1b( empty local-lis/les=0/0 n=0 ec=61/50 lis/c=95/95 les/c/f=96/96/0 sis=132) [1] r=0 lpr=132 pi=[95,132)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 11 09:18:54 compute-0 sudo[102966]: pam_unix(sudo:session): session closed for user root
Dec 11 09:18:54 compute-0 sudo[103111]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 11 09:18:54 compute-0 sudo[103111]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 11 09:18:54 compute-0 sudo[103111]: pam_unix(sudo:session): session closed for user root
Dec 11 09:18:54 compute-0 sudo[103136]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060 -- lvm list --format json
Dec 11 09:18:54 compute-0 sudo[103136]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 11 09:18:54 compute-0 ceph-mgr[74715]: [prometheus INFO cherrypy.access.139680037156848] ::ffff:192.168.122.100 - - [11/Dec/2025:09:18:54] "GET /metrics HTTP/1.1" 200 48227 "" "Prometheus/2.51.0"
Dec 11 09:18:54 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-mgr-compute-0-wwpcae[74711]: ::ffff:192.168.122.100 - - [11/Dec/2025:09:18:54] "GET /metrics HTTP/1.1" 200 48227 "" "Prometheus/2.51.0"
Dec 11 09:18:54 compute-0 ceph-mon[74426]: from='mgr.14577 192.168.122.100:0/2102670525' entity='mgr.compute-0.wwpcae' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "28"}]': finished
Dec 11 09:18:54 compute-0 ceph-mon[74426]: osdmap e132: 3 total, 3 up, 3 in
Dec 11 09:18:54 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-2-0-compute-0-iryjby[94782]: 11/12/2025 09:18:54 : epoch 693a8bfe : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fef70004500 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 11 09:18:54 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-haproxy-nfs-cephfs-compute-0-qtoxfz[95274]: [WARNING] 344/091854 (4) : Server backend/nfs.cephfs.1 is UP, reason: Layer4 check passed, check duration: 0ms. 3 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Dec 11 09:18:54 compute-0 podman[103203]: 2025-12-11 09:18:54.821779875 +0000 UTC m=+0.042825491 container create 74c7373f92c0a87a08be819f99f1b1243bc9cdf41a0eff4db37189f4e9f8155b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=objective_davinci, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Dec 11 09:18:54 compute-0 ceph-mon[74426]: mon.compute-0@0(leader).osd e132 do_prune osdmap full prune enabled
Dec 11 09:18:54 compute-0 ceph-mon[74426]: mon.compute-0@0(leader).osd e133 e133: 3 total, 3 up, 3 in
Dec 11 09:18:54 compute-0 ceph-mon[74426]: log_channel(cluster) log [DBG] : osdmap e133: 3 total, 3 up, 3 in
Dec 11 09:18:54 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 133 pg[10.1b( empty local-lis/les=0/0 n=0 ec=61/50 lis/c=95/95 les/c/f=96/96/0 sis=133) [1]/[0] r=-1 lpr=133 pi=[95,133)/1 crt=0'0 mlcod 0'0 remapped mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 11 09:18:54 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 133 pg[10.1b( empty local-lis/les=0/0 n=0 ec=61/50 lis/c=95/95 les/c/f=96/96/0 sis=133) [1]/[0] r=-1 lpr=133 pi=[95,133)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec 11 09:18:54 compute-0 systemd[1]: Started libpod-conmon-74c7373f92c0a87a08be819f99f1b1243bc9cdf41a0eff4db37189f4e9f8155b.scope.
Dec 11 09:18:54 compute-0 systemd[1]: Started libcrun container.
Dec 11 09:18:54 compute-0 podman[103203]: 2025-12-11 09:18:54.804493202 +0000 UTC m=+0.025538838 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec 11 09:18:54 compute-0 podman[103203]: 2025-12-11 09:18:54.906016552 +0000 UTC m=+0.127062178 container init 74c7373f92c0a87a08be819f99f1b1243bc9cdf41a0eff4db37189f4e9f8155b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=objective_davinci, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=squid)
Dec 11 09:18:54 compute-0 podman[103203]: 2025-12-11 09:18:54.914124362 +0000 UTC m=+0.135169978 container start 74c7373f92c0a87a08be819f99f1b1243bc9cdf41a0eff4db37189f4e9f8155b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=objective_davinci, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=squid, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 11 09:18:54 compute-0 podman[103203]: 2025-12-11 09:18:54.917823676 +0000 UTC m=+0.138869292 container attach 74c7373f92c0a87a08be819f99f1b1243bc9cdf41a0eff4db37189f4e9f8155b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=objective_davinci, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Dec 11 09:18:54 compute-0 objective_davinci[103219]: 167 167
Dec 11 09:18:54 compute-0 systemd[1]: libpod-74c7373f92c0a87a08be819f99f1b1243bc9cdf41a0eff4db37189f4e9f8155b.scope: Deactivated successfully.
Dec 11 09:18:54 compute-0 podman[103203]: 2025-12-11 09:18:54.919444876 +0000 UTC m=+0.140490502 container died 74c7373f92c0a87a08be819f99f1b1243bc9cdf41a0eff4db37189f4e9f8155b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=objective_davinci, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, CEPH_REF=squid, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 11 09:18:54 compute-0 systemd[1]: var-lib-containers-storage-overlay-29fc52ae898cc2abb7f11d7fc211e5af0b757177e8756b0328525f2b0b7e8daa-merged.mount: Deactivated successfully.
Dec 11 09:18:54 compute-0 podman[103203]: 2025-12-11 09:18:54.958399327 +0000 UTC m=+0.179444943 container remove 74c7373f92c0a87a08be819f99f1b1243bc9cdf41a0eff4db37189f4e9f8155b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=objective_davinci, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Dec 11 09:18:54 compute-0 systemd[1]: libpod-conmon-74c7373f92c0a87a08be819f99f1b1243bc9cdf41a0eff4db37189f4e9f8155b.scope: Deactivated successfully.
Dec 11 09:18:55 compute-0 radosgw[93354]: ====== starting new request req=0x7f36cd2845d0 =====
Dec 11 09:18:55 compute-0 radosgw[93354]: ====== req done req=0x7f36cd2845d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 11 09:18:55 compute-0 radosgw[93354]: beast: 0x7f36cd2845d0: 192.168.122.102 - anonymous [11/Dec/2025:09:18:54.998 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 11 09:18:55 compute-0 radosgw[93354]: ====== starting new request req=0x7f36cd2845d0 =====
Dec 11 09:18:55 compute-0 radosgw[93354]: ====== req done req=0x7f36cd2845d0 op status=0 http_status=200 latency=0.001000031s ======
Dec 11 09:18:55 compute-0 radosgw[93354]: beast: 0x7f36cd2845d0: 192.168.122.100 - anonymous [11/Dec/2025:09:18:55.011 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Dec 11 09:18:55 compute-0 ceph-mgr[74715]: log_channel(cluster) log [DBG] : pgmap v39: 353 pgs: 353 active+clean; 457 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 B/s wr, 0 op/s; 27 B/s, 1 objects/s recovering
Dec 11 09:18:55 compute-0 ceph-mon[74426]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "29"} v 0)
Dec 11 09:18:55 compute-0 ceph-mon[74426]: log_channel(audit) log [INF] : from='mgr.14577 192.168.122.100:0/2102670525' entity='mgr.compute-0.wwpcae' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "29"}]: dispatch
Dec 11 09:18:55 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-2-0-compute-0-iryjby[94782]: 11/12/2025 09:18:55 : epoch 693a8bfe : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fef4c0036c0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 11 09:18:55 compute-0 podman[103241]: 2025-12-11 09:18:55.135718174 +0000 UTC m=+0.055119311 container create 72ff9261d137d4744e3d7ba41fccfdd7ff9392eed581d7b7d54dc8728634b37d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=wonderful_franklin, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.schema-version=1.0)
Dec 11 09:18:55 compute-0 systemd[1]: Started libpod-conmon-72ff9261d137d4744e3d7ba41fccfdd7ff9392eed581d7b7d54dc8728634b37d.scope.
Dec 11 09:18:55 compute-0 systemd[1]: Started libcrun container.
Dec 11 09:18:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/32fc23b84681d974145aa14d3f24a83db29c31ed391b75aa23947b533d9db649/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 11 09:18:55 compute-0 podman[103241]: 2025-12-11 09:18:55.111573839 +0000 UTC m=+0.030974986 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec 11 09:18:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/32fc23b84681d974145aa14d3f24a83db29c31ed391b75aa23947b533d9db649/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 11 09:18:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/32fc23b84681d974145aa14d3f24a83db29c31ed391b75aa23947b533d9db649/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 11 09:18:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/32fc23b84681d974145aa14d3f24a83db29c31ed391b75aa23947b533d9db649/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 11 09:18:55 compute-0 podman[103241]: 2025-12-11 09:18:55.221245101 +0000 UTC m=+0.140646248 container init 72ff9261d137d4744e3d7ba41fccfdd7ff9392eed581d7b7d54dc8728634b37d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=wonderful_franklin, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 11 09:18:55 compute-0 podman[103241]: 2025-12-11 09:18:55.229920348 +0000 UTC m=+0.149321465 container start 72ff9261d137d4744e3d7ba41fccfdd7ff9392eed581d7b7d54dc8728634b37d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=wonderful_franklin, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 11 09:18:55 compute-0 podman[103241]: 2025-12-11 09:18:55.234043475 +0000 UTC m=+0.153444602 container attach 72ff9261d137d4744e3d7ba41fccfdd7ff9392eed581d7b7d54dc8728634b37d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=wonderful_franklin, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Dec 11 09:18:55 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-2-0-compute-0-iryjby[94782]: 11/12/2025 09:18:55 : epoch 693a8bfe : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fef60003490 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 11 09:18:55 compute-0 wonderful_franklin[103257]: {
Dec 11 09:18:55 compute-0 wonderful_franklin[103257]:     "1": [
Dec 11 09:18:55 compute-0 wonderful_franklin[103257]:         {
Dec 11 09:18:55 compute-0 wonderful_franklin[103257]:             "devices": [
Dec 11 09:18:55 compute-0 wonderful_franklin[103257]:                 "/dev/loop3"
Dec 11 09:18:55 compute-0 wonderful_franklin[103257]:             ],
Dec 11 09:18:55 compute-0 wonderful_franklin[103257]:             "lv_name": "ceph_lv0",
Dec 11 09:18:55 compute-0 wonderful_franklin[103257]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec 11 09:18:55 compute-0 wonderful_franklin[103257]:             "lv_size": "21470642176",
Dec 11 09:18:55 compute-0 wonderful_franklin[103257]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=TIjPv6-JN4c-U1Hf-F3k1-2mgj-CwGi-pbjIHh,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=865a308b-cdc3-4034-b5eb-feb596b462bf,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 11 09:18:55 compute-0 wonderful_franklin[103257]:             "lv_uuid": "TIjPv6-JN4c-U1Hf-F3k1-2mgj-CwGi-pbjIHh",
Dec 11 09:18:55 compute-0 wonderful_franklin[103257]:             "name": "ceph_lv0",
Dec 11 09:18:55 compute-0 wonderful_franklin[103257]:             "path": "/dev/ceph_vg0/ceph_lv0",
Dec 11 09:18:55 compute-0 wonderful_franklin[103257]:             "tags": {
Dec 11 09:18:55 compute-0 wonderful_franklin[103257]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec 11 09:18:55 compute-0 wonderful_franklin[103257]:                 "ceph.block_uuid": "TIjPv6-JN4c-U1Hf-F3k1-2mgj-CwGi-pbjIHh",
Dec 11 09:18:55 compute-0 wonderful_franklin[103257]:                 "ceph.cephx_lockbox_secret": "",
Dec 11 09:18:55 compute-0 wonderful_franklin[103257]:                 "ceph.cluster_fsid": "31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060",
Dec 11 09:18:55 compute-0 wonderful_franklin[103257]:                 "ceph.cluster_name": "ceph",
Dec 11 09:18:55 compute-0 wonderful_franklin[103257]:                 "ceph.crush_device_class": "",
Dec 11 09:18:55 compute-0 wonderful_franklin[103257]:                 "ceph.encrypted": "0",
Dec 11 09:18:55 compute-0 wonderful_franklin[103257]:                 "ceph.osd_fsid": "865a308b-cdc3-4034-b5eb-feb596b462bf",
Dec 11 09:18:55 compute-0 wonderful_franklin[103257]:                 "ceph.osd_id": "1",
Dec 11 09:18:55 compute-0 wonderful_franklin[103257]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 11 09:18:55 compute-0 wonderful_franklin[103257]:                 "ceph.type": "block",
Dec 11 09:18:55 compute-0 wonderful_franklin[103257]:                 "ceph.vdo": "0",
Dec 11 09:18:55 compute-0 wonderful_franklin[103257]:                 "ceph.with_tpm": "0"
Dec 11 09:18:55 compute-0 wonderful_franklin[103257]:             },
Dec 11 09:18:55 compute-0 wonderful_franklin[103257]:             "type": "block",
Dec 11 09:18:55 compute-0 wonderful_franklin[103257]:             "vg_name": "ceph_vg0"
Dec 11 09:18:55 compute-0 wonderful_franklin[103257]:         }
Dec 11 09:18:55 compute-0 wonderful_franklin[103257]:     ]
Dec 11 09:18:55 compute-0 wonderful_franklin[103257]: }
Dec 11 09:18:55 compute-0 systemd[1]: libpod-72ff9261d137d4744e3d7ba41fccfdd7ff9392eed581d7b7d54dc8728634b37d.scope: Deactivated successfully.
Dec 11 09:18:55 compute-0 podman[103241]: 2025-12-11 09:18:55.540830634 +0000 UTC m=+0.460231761 container died 72ff9261d137d4744e3d7ba41fccfdd7ff9392eed581d7b7d54dc8728634b37d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=wonderful_franklin, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, ceph=True, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Dec 11 09:18:55 compute-0 systemd[1]: var-lib-containers-storage-overlay-32fc23b84681d974145aa14d3f24a83db29c31ed391b75aa23947b533d9db649-merged.mount: Deactivated successfully.
Dec 11 09:18:55 compute-0 podman[103241]: 2025-12-11 09:18:55.63249342 +0000 UTC m=+0.551894527 container remove 72ff9261d137d4744e3d7ba41fccfdd7ff9392eed581d7b7d54dc8728634b37d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=wonderful_franklin, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 11 09:18:55 compute-0 systemd[1]: libpod-conmon-72ff9261d137d4744e3d7ba41fccfdd7ff9392eed581d7b7d54dc8728634b37d.scope: Deactivated successfully.
Dec 11 09:18:55 compute-0 sudo[103136]: pam_unix(sudo:session): session closed for user root
Dec 11 09:18:55 compute-0 sudo[103277]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 11 09:18:55 compute-0 sudo[103277]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 11 09:18:55 compute-0 sudo[103277]: pam_unix(sudo:session): session closed for user root
Dec 11 09:18:55 compute-0 sudo[103302]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060 -- raw list --format json
Dec 11 09:18:55 compute-0 sudo[103302]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 11 09:18:55 compute-0 ceph-mon[74426]: mon.compute-0@0(leader).osd e133 do_prune osdmap full prune enabled
Dec 11 09:18:55 compute-0 ceph-mon[74426]: log_channel(audit) log [INF] : from='mgr.14577 192.168.122.100:0/2102670525' entity='mgr.compute-0.wwpcae' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "29"}]': finished
Dec 11 09:18:55 compute-0 ceph-mon[74426]: mon.compute-0@0(leader).osd e134 e134: 3 total, 3 up, 3 in
Dec 11 09:18:55 compute-0 ceph-mon[74426]: osdmap e133: 3 total, 3 up, 3 in
Dec 11 09:18:55 compute-0 ceph-mon[74426]: pgmap v39: 353 pgs: 353 active+clean; 457 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 B/s wr, 0 op/s; 27 B/s, 1 objects/s recovering
Dec 11 09:18:55 compute-0 ceph-mon[74426]: from='mgr.14577 192.168.122.100:0/2102670525' entity='mgr.compute-0.wwpcae' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "29"}]: dispatch
Dec 11 09:18:55 compute-0 ceph-mon[74426]: log_channel(cluster) log [DBG] : osdmap e134: 3 total, 3 up, 3 in
Dec 11 09:18:56 compute-0 ceph-mon[74426]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 11 09:18:56 compute-0 ceph-mon[74426]: mon.compute-0@0(leader).osd e134 do_prune osdmap full prune enabled
Dec 11 09:18:56 compute-0 ceph-mon[74426]: mon.compute-0@0(leader).osd e135 e135: 3 total, 3 up, 3 in
Dec 11 09:18:56 compute-0 ceph-mon[74426]: log_channel(cluster) log [DBG] : osdmap e135: 3 total, 3 up, 3 in
Dec 11 09:18:56 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 135 pg[10.1b( v 56'1088 (0'0,56'1088] local-lis/les=0/0 n=2 ec=61/50 lis/c=133/95 les/c/f=134/96/0 sis=135) [1] r=0 lpr=135 pi=[95,135)/1 luod=0'0 crt=56'1088 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 1 -> 1, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Dec 11 09:18:56 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 135 pg[10.1b( v 56'1088 (0'0,56'1088] local-lis/les=0/0 n=2 ec=61/50 lis/c=133/95 les/c/f=134/96/0 sis=135) [1] r=0 lpr=135 pi=[95,135)/1 crt=56'1088 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 11 09:18:56 compute-0 podman[103366]: 2025-12-11 09:18:56.289426163 +0000 UTC m=+0.051832458 container create 36f5c092cfa58e5fabb7e193dc1b8f552e40e3140d30727363285c81ca17ab5d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=fervent_swartz, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec 11 09:18:56 compute-0 systemd[1]: Started libpod-conmon-36f5c092cfa58e5fabb7e193dc1b8f552e40e3140d30727363285c81ca17ab5d.scope.
Dec 11 09:18:56 compute-0 podman[103366]: 2025-12-11 09:18:56.268849569 +0000 UTC m=+0.031255864 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec 11 09:18:56 compute-0 systemd[1]: Started libcrun container.
Dec 11 09:18:56 compute-0 podman[103366]: 2025-12-11 09:18:56.37980588 +0000 UTC m=+0.142212195 container init 36f5c092cfa58e5fabb7e193dc1b8f552e40e3140d30727363285c81ca17ab5d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=fervent_swartz, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default)
Dec 11 09:18:56 compute-0 podman[103366]: 2025-12-11 09:18:56.386201527 +0000 UTC m=+0.148607822 container start 36f5c092cfa58e5fabb7e193dc1b8f552e40e3140d30727363285c81ca17ab5d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=fervent_swartz, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec 11 09:18:56 compute-0 fervent_swartz[103382]: 167 167
Dec 11 09:18:56 compute-0 systemd[1]: libpod-36f5c092cfa58e5fabb7e193dc1b8f552e40e3140d30727363285c81ca17ab5d.scope: Deactivated successfully.
Dec 11 09:18:56 compute-0 conmon[103382]: conmon 36f5c092cfa58e5fabb7 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-36f5c092cfa58e5fabb7e193dc1b8f552e40e3140d30727363285c81ca17ab5d.scope/container/memory.events
Dec 11 09:18:56 compute-0 podman[103366]: 2025-12-11 09:18:56.626768844 +0000 UTC m=+0.389175139 container attach 36f5c092cfa58e5fabb7e193dc1b8f552e40e3140d30727363285c81ca17ab5d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=fervent_swartz, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0)
Dec 11 09:18:56 compute-0 podman[103366]: 2025-12-11 09:18:56.627460236 +0000 UTC m=+0.389866531 container died 36f5c092cfa58e5fabb7e193dc1b8f552e40e3140d30727363285c81ca17ab5d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=fervent_swartz, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 11 09:18:56 compute-0 systemd[1]: var-lib-containers-storage-overlay-50b1cb19ba6006458f08bcbb786f84d81b337643bd763dd45bafd4d422f64819-merged.mount: Deactivated successfully.
Dec 11 09:18:56 compute-0 podman[103366]: 2025-12-11 09:18:56.757371621 +0000 UTC m=+0.519777906 container remove 36f5c092cfa58e5fabb7e193dc1b8f552e40e3140d30727363285c81ca17ab5d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=fervent_swartz, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec 11 09:18:56 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-2-0-compute-0-iryjby[94782]: 11/12/2025 09:18:56 : epoch 693a8bfe : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fef540014d0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 11 09:18:56 compute-0 systemd[1]: libpod-conmon-36f5c092cfa58e5fabb7e193dc1b8f552e40e3140d30727363285c81ca17ab5d.scope: Deactivated successfully.
Dec 11 09:18:56 compute-0 ceph-mon[74426]: from='mgr.14577 192.168.122.100:0/2102670525' entity='mgr.compute-0.wwpcae' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "29"}]': finished
Dec 11 09:18:56 compute-0 ceph-mon[74426]: osdmap e134: 3 total, 3 up, 3 in
Dec 11 09:18:56 compute-0 ceph-mon[74426]: osdmap e135: 3 total, 3 up, 3 in
Dec 11 09:18:56 compute-0 sudo[103405]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 11 09:18:56 compute-0 sudo[103405]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 11 09:18:56 compute-0 sudo[103405]: pam_unix(sudo:session): session closed for user root
Dec 11 09:18:56 compute-0 podman[103411]: 2025-12-11 09:18:56.958959916 +0000 UTC m=+0.070455943 container create 5cb58127d645e475e234652cde1d58f6ae4dd8990a358e1d5cff0844297dd5eb (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=zen_fermat, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 11 09:18:57 compute-0 radosgw[93354]: ====== starting new request req=0x7f36cd2845d0 =====
Dec 11 09:18:57 compute-0 radosgw[93354]: ====== req done req=0x7f36cd2845d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 11 09:18:57 compute-0 radosgw[93354]: beast: 0x7f36cd2845d0: 192.168.122.102 - anonymous [11/Dec/2025:09:18:57.001 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 11 09:18:57 compute-0 podman[103411]: 2025-12-11 09:18:56.916049544 +0000 UTC m=+0.027545591 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec 11 09:18:57 compute-0 radosgw[93354]: ====== starting new request req=0x7f36cd2845d0 =====
Dec 11 09:18:57 compute-0 radosgw[93354]: ====== req done req=0x7f36cd2845d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 11 09:18:57 compute-0 radosgw[93354]: beast: 0x7f36cd2845d0: 192.168.122.100 - anonymous [11/Dec/2025:09:18:57.015 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 11 09:18:57 compute-0 systemd[1]: Started libpod-conmon-5cb58127d645e475e234652cde1d58f6ae4dd8990a358e1d5cff0844297dd5eb.scope.
Dec 11 09:18:57 compute-0 systemd[1]: Started libcrun container.
Dec 11 09:18:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/62afaeada3720a53d9586a53f8b24ff75ff1fa0046d96d24cfa6e472cdc89a0d/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 11 09:18:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/62afaeada3720a53d9586a53f8b24ff75ff1fa0046d96d24cfa6e472cdc89a0d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 11 09:18:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/62afaeada3720a53d9586a53f8b24ff75ff1fa0046d96d24cfa6e472cdc89a0d/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 11 09:18:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/62afaeada3720a53d9586a53f8b24ff75ff1fa0046d96d24cfa6e472cdc89a0d/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 11 09:18:57 compute-0 podman[103411]: 2025-12-11 09:18:57.072859019 +0000 UTC m=+0.184355086 container init 5cb58127d645e475e234652cde1d58f6ae4dd8990a358e1d5cff0844297dd5eb (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=zen_fermat, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, CEPH_REF=squid, io.buildah.version=1.40.1)
Dec 11 09:18:57 compute-0 podman[103411]: 2025-12-11 09:18:57.082067902 +0000 UTC m=+0.193563929 container start 5cb58127d645e475e234652cde1d58f6ae4dd8990a358e1d5cff0844297dd5eb (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=zen_fermat, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 11 09:18:57 compute-0 ceph-mgr[74715]: log_channel(cluster) log [DBG] : pgmap v42: 353 pgs: 353 active+clean; 457 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 B/s wr, 0 op/s; 27 B/s, 1 objects/s recovering
Dec 11 09:18:57 compute-0 podman[103411]: 2025-12-11 09:18:57.08785044 +0000 UTC m=+0.199346467 container attach 5cb58127d645e475e234652cde1d58f6ae4dd8990a358e1d5cff0844297dd5eb (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=zen_fermat, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, CEPH_REF=squid, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 11 09:18:57 compute-0 ceph-mon[74426]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "30"} v 0)
Dec 11 09:18:57 compute-0 ceph-mon[74426]: log_channel(audit) log [INF] : from='mgr.14577 192.168.122.100:0/2102670525' entity='mgr.compute-0.wwpcae' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "30"}]: dispatch
Dec 11 09:18:57 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-2-0-compute-0-iryjby[94782]: 11/12/2025 09:18:57 : epoch 693a8bfe : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fef70004520 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 11 09:18:57 compute-0 ceph-mon[74426]: mon.compute-0@0(leader).osd e135 do_prune osdmap full prune enabled
Dec 11 09:18:57 compute-0 ceph-mon[74426]: log_channel(audit) log [INF] : from='mgr.14577 192.168.122.100:0/2102670525' entity='mgr.compute-0.wwpcae' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "30"}]': finished
Dec 11 09:18:57 compute-0 ceph-mon[74426]: mon.compute-0@0(leader).osd e136 e136: 3 total, 3 up, 3 in
Dec 11 09:18:57 compute-0 ceph-mon[74426]: log_channel(cluster) log [DBG] : osdmap e136: 3 total, 3 up, 3 in
Dec 11 09:18:57 compute-0 ceph-osd[82859]: osd.1 pg_epoch: 136 pg[10.1b( v 56'1088 (0'0,56'1088] local-lis/les=135/136 n=2 ec=61/50 lis/c=133/95 les/c/f=134/96/0 sis=135) [1] r=0 lpr=135 pi=[95,135)/1 crt=56'1088 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 11 09:18:57 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-2-0-compute-0-iryjby[94782]: 11/12/2025 09:18:57 : epoch 693a8bfe : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fef70004520 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 11 09:18:57 compute-0 lvm[103522]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Dec 11 09:18:57 compute-0 lvm[103522]: VG ceph_vg0 finished
Dec 11 09:18:57 compute-0 zen_fermat[103448]: {}
Dec 11 09:18:57 compute-0 systemd[1]: libpod-5cb58127d645e475e234652cde1d58f6ae4dd8990a358e1d5cff0844297dd5eb.scope: Deactivated successfully.
Dec 11 09:18:57 compute-0 systemd[1]: libpod-5cb58127d645e475e234652cde1d58f6ae4dd8990a358e1d5cff0844297dd5eb.scope: Consumed 1.208s CPU time.
Dec 11 09:18:57 compute-0 podman[103411]: 2025-12-11 09:18:57.840164726 +0000 UTC m=+0.951660753 container died 5cb58127d645e475e234652cde1d58f6ae4dd8990a358e1d5cff0844297dd5eb (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=zen_fermat, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec 11 09:18:57 compute-0 systemd[1]: var-lib-containers-storage-overlay-62afaeada3720a53d9586a53f8b24ff75ff1fa0046d96d24cfa6e472cdc89a0d-merged.mount: Deactivated successfully.
Dec 11 09:18:57 compute-0 podman[103411]: 2025-12-11 09:18:57.883444089 +0000 UTC m=+0.994940116 container remove 5cb58127d645e475e234652cde1d58f6ae4dd8990a358e1d5cff0844297dd5eb (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=zen_fermat, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.build-date=20250325)
Dec 11 09:18:57 compute-0 ceph-mon[74426]: pgmap v42: 353 pgs: 353 active+clean; 457 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 B/s wr, 0 op/s; 27 B/s, 1 objects/s recovering
Dec 11 09:18:57 compute-0 ceph-mon[74426]: from='mgr.14577 192.168.122.100:0/2102670525' entity='mgr.compute-0.wwpcae' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "30"}]: dispatch
Dec 11 09:18:57 compute-0 ceph-mon[74426]: from='mgr.14577 192.168.122.100:0/2102670525' entity='mgr.compute-0.wwpcae' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "30"}]': finished
Dec 11 09:18:57 compute-0 ceph-mon[74426]: osdmap e136: 3 total, 3 up, 3 in
Dec 11 09:18:57 compute-0 systemd[1]: libpod-conmon-5cb58127d645e475e234652cde1d58f6ae4dd8990a358e1d5cff0844297dd5eb.scope: Deactivated successfully.
Dec 11 09:18:57 compute-0 sudo[103562]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qgvghastvbhaqgdwqnlrxdfmqffzpzkt ; /usr/bin/python3'
Dec 11 09:18:57 compute-0 sudo[103562]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 11 09:18:57 compute-0 sudo[103302]: pam_unix(sudo:session): session closed for user root
Dec 11 09:18:57 compute-0 ceph-mon[74426]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec 11 09:18:57 compute-0 ceph-mon[74426]: log_channel(audit) log [INF] : from='mgr.14577 192.168.122.100:0/2102670525' entity='mgr.compute-0.wwpcae' 
Dec 11 09:18:57 compute-0 ceph-mon[74426]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec 11 09:18:57 compute-0 ceph-mon[74426]: log_channel(audit) log [INF] : from='mgr.14577 192.168.122.100:0/2102670525' entity='mgr.compute-0.wwpcae' 
Dec 11 09:18:58 compute-0 sudo[103565]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 11 09:18:58 compute-0 sudo[103565]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 11 09:18:58 compute-0 sudo[103565]: pam_unix(sudo:session): session closed for user root
Dec 11 09:18:58 compute-0 python3[103564]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint radosgw-admin quay.io/ceph/ceph:v19 --fsid 31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch ps --daemon-type rgw --format json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 11 09:18:58 compute-0 podman[103590]: 2025-12-11 09:18:58.119665053 +0000 UTC m=+0.042479651 container create f31fed6222244231a63ae396d56ce0ebc96ce7fbd50f57bfa309f790594d4d98 (image=quay.io/ceph/ceph:v19, name=zealous_margulis, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=squid, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 11 09:18:58 compute-0 systemd[1]: Started libpod-conmon-f31fed6222244231a63ae396d56ce0ebc96ce7fbd50f57bfa309f790594d4d98.scope.
Dec 11 09:18:58 compute-0 systemd[1]: Started libcrun container.
Dec 11 09:18:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bd1349ca961a7c839533cce9348d61146147f07a30b0dc9abe1a22314b6c6c39/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 11 09:18:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bd1349ca961a7c839533cce9348d61146147f07a30b0dc9abe1a22314b6c6c39/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec 11 09:18:58 compute-0 podman[103590]: 2025-12-11 09:18:58.102134232 +0000 UTC m=+0.024948850 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Dec 11 09:18:58 compute-0 podman[103590]: 2025-12-11 09:18:58.198367849 +0000 UTC m=+0.121182447 container init f31fed6222244231a63ae396d56ce0ebc96ce7fbd50f57bfa309f790594d4d98 (image=quay.io/ceph/ceph:v19, name=zealous_margulis, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 11 09:18:58 compute-0 podman[103590]: 2025-12-11 09:18:58.205982304 +0000 UTC m=+0.128796902 container start f31fed6222244231a63ae396d56ce0ebc96ce7fbd50f57bfa309f790594d4d98 (image=quay.io/ceph/ceph:v19, name=zealous_margulis, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, OSD_FLAVOR=default, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325)
Dec 11 09:18:58 compute-0 podman[103590]: 2025-12-11 09:18:58.209228374 +0000 UTC m=+0.132042992 container attach f31fed6222244231a63ae396d56ce0ebc96ce7fbd50f57bfa309f790594d4d98 (image=quay.io/ceph/ceph:v19, name=zealous_margulis, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=squid, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 11 09:18:58 compute-0 zealous_margulis[103606]: ERROR: invalid flag --daemon-type
Dec 11 09:18:58 compute-0 systemd[1]: libpod-f31fed6222244231a63ae396d56ce0ebc96ce7fbd50f57bfa309f790594d4d98.scope: Deactivated successfully.
Dec 11 09:18:58 compute-0 podman[103590]: 2025-12-11 09:18:58.270541034 +0000 UTC m=+0.193355642 container died f31fed6222244231a63ae396d56ce0ebc96ce7fbd50f57bfa309f790594d4d98 (image=quay.io/ceph/ceph:v19, name=zealous_margulis, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=squid, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 11 09:18:58 compute-0 systemd[1]: var-lib-containers-storage-overlay-bd1349ca961a7c839533cce9348d61146147f07a30b0dc9abe1a22314b6c6c39-merged.mount: Deactivated successfully.
Dec 11 09:18:58 compute-0 podman[103590]: 2025-12-11 09:18:58.310722112 +0000 UTC m=+0.233536730 container remove f31fed6222244231a63ae396d56ce0ebc96ce7fbd50f57bfa309f790594d4d98 (image=quay.io/ceph/ceph:v19, name=zealous_margulis, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True)
Dec 11 09:18:58 compute-0 systemd[1]: libpod-conmon-f31fed6222244231a63ae396d56ce0ebc96ce7fbd50f57bfa309f790594d4d98.scope: Deactivated successfully.
Dec 11 09:18:58 compute-0 sudo[103562]: pam_unix(sudo:session): session closed for user root
Dec 11 09:18:58 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-2-0-compute-0-iryjby[94782]: 11/12/2025 09:18:58 : epoch 693a8bfe : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fef60003490 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 11 09:18:59 compute-0 ceph-mon[74426]: from='mgr.14577 192.168.122.100:0/2102670525' entity='mgr.compute-0.wwpcae' 
Dec 11 09:18:59 compute-0 ceph-mon[74426]: from='mgr.14577 192.168.122.100:0/2102670525' entity='mgr.compute-0.wwpcae' 
Dec 11 09:18:59 compute-0 radosgw[93354]: ====== starting new request req=0x7f36cd2845d0 =====
Dec 11 09:18:59 compute-0 radosgw[93354]: ====== req done req=0x7f36cd2845d0 op status=0 http_status=200 latency=0.001000032s ======
Dec 11 09:18:59 compute-0 radosgw[93354]: beast: 0x7f36cd2845d0: 192.168.122.102 - anonymous [11/Dec/2025:09:18:59.004 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Dec 11 09:18:59 compute-0 radosgw[93354]: ====== starting new request req=0x7f36cd2845d0 =====
Dec 11 09:18:59 compute-0 radosgw[93354]: ====== req done req=0x7f36cd2845d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 11 09:18:59 compute-0 radosgw[93354]: beast: 0x7f36cd2845d0: 192.168.122.100 - anonymous [11/Dec/2025:09:18:59.018 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 11 09:18:59 compute-0 ceph-mgr[74715]: log_channel(cluster) log [DBG] : pgmap v44: 353 pgs: 353 active+clean; 457 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Dec 11 09:18:59 compute-0 ceph-mon[74426]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "31"} v 0)
Dec 11 09:18:59 compute-0 ceph-mon[74426]: log_channel(audit) log [INF] : from='mgr.14577 192.168.122.100:0/2102670525' entity='mgr.compute-0.wwpcae' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "31"}]: dispatch
Dec 11 09:18:59 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-2-0-compute-0-iryjby[94782]: 11/12/2025 09:18:59 : epoch 693a8bfe : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fef54002340 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 11 09:18:59 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-2-0-compute-0-iryjby[94782]: 11/12/2025 09:18:59 : epoch 693a8bfe : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fef70004520 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 11 09:19:00 compute-0 ceph-mon[74426]: mon.compute-0@0(leader).osd e136 do_prune osdmap full prune enabled
Dec 11 09:19:00 compute-0 ceph-mon[74426]: pgmap v44: 353 pgs: 353 active+clean; 457 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Dec 11 09:19:00 compute-0 ceph-mon[74426]: from='mgr.14577 192.168.122.100:0/2102670525' entity='mgr.compute-0.wwpcae' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "31"}]: dispatch
Dec 11 09:19:00 compute-0 ceph-mon[74426]: log_channel(audit) log [INF] : from='mgr.14577 192.168.122.100:0/2102670525' entity='mgr.compute-0.wwpcae' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "31"}]': finished
Dec 11 09:19:00 compute-0 ceph-mon[74426]: mon.compute-0@0(leader).osd e137 e137: 3 total, 3 up, 3 in
Dec 11 09:19:00 compute-0 ceph-mon[74426]: log_channel(cluster) log [DBG] : osdmap e137: 3 total, 3 up, 3 in
Dec 11 09:19:00 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-2-0-compute-0-iryjby[94782]: 11/12/2025 09:19:00 : epoch 693a8bfe : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fef4c0036c0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 11 09:19:01 compute-0 radosgw[93354]: ====== starting new request req=0x7f36cd2845d0 =====
Dec 11 09:19:01 compute-0 radosgw[93354]: ====== req done req=0x7f36cd2845d0 op status=0 http_status=200 latency=0.001000031s ======
Dec 11 09:19:01 compute-0 radosgw[93354]: beast: 0x7f36cd2845d0: 192.168.122.102 - anonymous [11/Dec/2025:09:19:01.008 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Dec 11 09:19:01 compute-0 radosgw[93354]: ====== starting new request req=0x7f36cd2845d0 =====
Dec 11 09:19:01 compute-0 radosgw[93354]: ====== req done req=0x7f36cd2845d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 11 09:19:01 compute-0 radosgw[93354]: beast: 0x7f36cd2845d0: 192.168.122.100 - anonymous [11/Dec/2025:09:19:01.021 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 11 09:19:01 compute-0 ceph-mon[74426]: from='mgr.14577 192.168.122.100:0/2102670525' entity='mgr.compute-0.wwpcae' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "31"}]': finished
Dec 11 09:19:01 compute-0 ceph-mon[74426]: osdmap e137: 3 total, 3 up, 3 in
Dec 11 09:19:01 compute-0 ceph-mgr[74715]: log_channel(cluster) log [DBG] : pgmap v46: 353 pgs: 353 active+clean; 457 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 196 B/s rd, 0 op/s; 21 B/s, 0 objects/s recovering
Dec 11 09:19:01 compute-0 ceph-mon[74426]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "32"} v 0)
Dec 11 09:19:01 compute-0 ceph-mon[74426]: log_channel(audit) log [INF] : from='mgr.14577 192.168.122.100:0/2102670525' entity='mgr.compute-0.wwpcae' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "32"}]: dispatch
Dec 11 09:19:01 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-2-0-compute-0-iryjby[94782]: 11/12/2025 09:19:01 : epoch 693a8bfe : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fef60003490 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 11 09:19:01 compute-0 ceph-mon[74426]: mon.compute-0@0(leader).osd e137 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 11 09:19:01 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-2-0-compute-0-iryjby[94782]: 11/12/2025 09:19:01 : epoch 693a8bfe : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fef54002340 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 11 09:19:02 compute-0 ceph-mon[74426]: mon.compute-0@0(leader).osd e137 do_prune osdmap full prune enabled
Dec 11 09:19:02 compute-0 ceph-mon[74426]: log_channel(audit) log [INF] : from='mgr.14577 192.168.122.100:0/2102670525' entity='mgr.compute-0.wwpcae' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "32"}]': finished
Dec 11 09:19:02 compute-0 ceph-mon[74426]: mon.compute-0@0(leader).osd e138 e138: 3 total, 3 up, 3 in
Dec 11 09:19:02 compute-0 ceph-mon[74426]: log_channel(cluster) log [DBG] : osdmap e138: 3 total, 3 up, 3 in
Dec 11 09:19:02 compute-0 ceph-mon[74426]: pgmap v46: 353 pgs: 353 active+clean; 457 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 196 B/s rd, 0 op/s; 21 B/s, 0 objects/s recovering
Dec 11 09:19:02 compute-0 ceph-mon[74426]: from='mgr.14577 192.168.122.100:0/2102670525' entity='mgr.compute-0.wwpcae' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "32"}]: dispatch
Dec 11 09:19:02 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-2-0-compute-0-iryjby[94782]: 11/12/2025 09:19:02 : epoch 693a8bfe : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fef54002340 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 11 09:19:03 compute-0 radosgw[93354]: ====== starting new request req=0x7f36cd2845d0 =====
Dec 11 09:19:03 compute-0 radosgw[93354]: ====== req done req=0x7f36cd2845d0 op status=0 http_status=200 latency=0.001000032s ======
Dec 11 09:19:03 compute-0 radosgw[93354]: beast: 0x7f36cd2845d0: 192.168.122.102 - anonymous [11/Dec/2025:09:19:03.011 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Dec 11 09:19:03 compute-0 radosgw[93354]: ====== starting new request req=0x7f36cd2845d0 =====
Dec 11 09:19:03 compute-0 radosgw[93354]: ====== req done req=0x7f36cd2845d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 11 09:19:03 compute-0 radosgw[93354]: beast: 0x7f36cd2845d0: 192.168.122.100 - anonymous [11/Dec/2025:09:19:03.024 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 11 09:19:03 compute-0 ceph-mon[74426]: mon.compute-0@0(leader).osd e138 do_prune osdmap full prune enabled
Dec 11 09:19:03 compute-0 ceph-mon[74426]: from='mgr.14577 192.168.122.100:0/2102670525' entity='mgr.compute-0.wwpcae' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "32"}]': finished
Dec 11 09:19:03 compute-0 ceph-mon[74426]: osdmap e138: 3 total, 3 up, 3 in
Dec 11 09:19:03 compute-0 ceph-mon[74426]: mon.compute-0@0(leader).osd e139 e139: 3 total, 3 up, 3 in
Dec 11 09:19:03 compute-0 ceph-mon[74426]: log_channel(cluster) log [DBG] : osdmap e139: 3 total, 3 up, 3 in
Dec 11 09:19:03 compute-0 ceph-mgr[74715]: log_channel(cluster) log [DBG] : pgmap v49: 353 pgs: 353 active+clean; 457 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 175 B/s rd, 0 op/s; 18 B/s, 0 objects/s recovering
Dec 11 09:19:03 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-2-0-compute-0-iryjby[94782]: 11/12/2025 09:19:03 : epoch 693a8bfe : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fef4c0036c0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 11 09:19:03 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-2-0-compute-0-iryjby[94782]: 11/12/2025 09:19:03 : epoch 693a8bfe : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fef60003490 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 11 09:19:04 compute-0 ceph-mon[74426]: mon.compute-0@0(leader).osd e139 do_prune osdmap full prune enabled
Dec 11 09:19:04 compute-0 ceph-mon[74426]: osdmap e139: 3 total, 3 up, 3 in
Dec 11 09:19:04 compute-0 ceph-mon[74426]: pgmap v49: 353 pgs: 353 active+clean; 457 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 175 B/s rd, 0 op/s; 18 B/s, 0 objects/s recovering
Dec 11 09:19:04 compute-0 ceph-mon[74426]: mon.compute-0@0(leader).osd e140 e140: 3 total, 3 up, 3 in
Dec 11 09:19:04 compute-0 ceph-mon[74426]: log_channel(cluster) log [DBG] : osdmap e140: 3 total, 3 up, 3 in
Dec 11 09:19:04 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-mgr-compute-0-wwpcae[74711]: ::ffff:192.168.122.100 - - [11/Dec/2025:09:19:04] "GET /metrics HTTP/1.1" 200 48227 "" "Prometheus/2.51.0"
Dec 11 09:19:04 compute-0 ceph-mgr[74715]: [prometheus INFO cherrypy.access.139680037156848] ::ffff:192.168.122.100 - - [11/Dec/2025:09:19:04] "GET /metrics HTTP/1.1" 200 48227 "" "Prometheus/2.51.0"
Dec 11 09:19:04 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-2-0-compute-0-iryjby[94782]: 11/12/2025 09:19:04 : epoch 693a8bfe : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fef54002340 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 11 09:19:05 compute-0 radosgw[93354]: ====== starting new request req=0x7f36cd2845d0 =====
Dec 11 09:19:05 compute-0 radosgw[93354]: ====== req done req=0x7f36cd2845d0 op status=0 http_status=200 latency=0.001000032s ======
Dec 11 09:19:05 compute-0 radosgw[93354]: beast: 0x7f36cd2845d0: 192.168.122.102 - anonymous [11/Dec/2025:09:19:05.016 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Dec 11 09:19:05 compute-0 radosgw[93354]: ====== starting new request req=0x7f36cd2845d0 =====
Dec 11 09:19:05 compute-0 radosgw[93354]: ====== req done req=0x7f36cd2845d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 11 09:19:05 compute-0 radosgw[93354]: beast: 0x7f36cd2845d0: 192.168.122.100 - anonymous [11/Dec/2025:09:19:05.026 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 11 09:19:05 compute-0 ceph-mgr[74715]: log_channel(cluster) log [DBG] : pgmap v51: 353 pgs: 1 activating+remapped, 1 remapped+peering, 351 active+clean; 457 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 404 B/s rd, 0 op/s; 5/221 objects misplaced (2.262%)
Dec 11 09:19:05 compute-0 ceph-mon[74426]: mon.compute-0@0(leader).osd e140 do_prune osdmap full prune enabled
Dec 11 09:19:05 compute-0 ceph-mon[74426]: mon.compute-0@0(leader).osd e141 e141: 3 total, 3 up, 3 in
Dec 11 09:19:05 compute-0 ceph-mon[74426]: log_channel(cluster) log [DBG] : osdmap e141: 3 total, 3 up, 3 in
Dec 11 09:19:05 compute-0 ceph-mon[74426]: osdmap e140: 3 total, 3 up, 3 in
Dec 11 09:19:05 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-2-0-compute-0-iryjby[94782]: 11/12/2025 09:19:05 : epoch 693a8bfe : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fef70004700 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 11 09:19:05 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-2-0-compute-0-iryjby[94782]: 11/12/2025 09:19:05 : epoch 693a8bfe : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fef70004700 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 11 09:19:06 compute-0 ceph-mon[74426]: mon.compute-0@0(leader).osd e141 do_prune osdmap full prune enabled
Dec 11 09:19:06 compute-0 ceph-mon[74426]: mon.compute-0@0(leader).osd e142 e142: 3 total, 3 up, 3 in
Dec 11 09:19:06 compute-0 ceph-mon[74426]: log_channel(cluster) log [DBG] : osdmap e142: 3 total, 3 up, 3 in
Dec 11 09:19:06 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-2-0-compute-0-iryjby[94782]: 11/12/2025 09:19:06 : epoch 693a8bfe : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fef60003490 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 11 09:19:07 compute-0 radosgw[93354]: ====== starting new request req=0x7f36cd2845d0 =====
Dec 11 09:19:07 compute-0 radosgw[93354]: ====== req done req=0x7f36cd2845d0 op status=0 http_status=200 latency=0.001000031s ======
Dec 11 09:19:07 compute-0 radosgw[93354]: beast: 0x7f36cd2845d0: 192.168.122.102 - anonymous [11/Dec/2025:09:19:07.020 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Dec 11 09:19:07 compute-0 radosgw[93354]: ====== starting new request req=0x7f36cd2845d0 =====
Dec 11 09:19:07 compute-0 radosgw[93354]: ====== req done req=0x7f36cd2845d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 11 09:19:07 compute-0 radosgw[93354]: beast: 0x7f36cd2845d0: 192.168.122.100 - anonymous [11/Dec/2025:09:19:07.028 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 11 09:19:07 compute-0 ceph-mon[74426]: pgmap v51: 353 pgs: 1 activating+remapped, 1 remapped+peering, 351 active+clean; 457 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 404 B/s rd, 0 op/s; 5/221 objects misplaced (2.262%)
Dec 11 09:19:07 compute-0 ceph-mon[74426]: osdmap e141: 3 total, 3 up, 3 in
Dec 11 09:19:07 compute-0 ceph-mgr[74715]: log_channel(cluster) log [DBG] : pgmap v54: 353 pgs: 1 activating+remapped, 1 remapped+peering, 351 active+clean; 457 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 510 B/s rd, 0 op/s; 5/221 objects misplaced (2.262%)
Dec 11 09:19:07 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-2-0-compute-0-iryjby[94782]: 11/12/2025 09:19:07 : epoch 693a8bfe : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fef54002340 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 11 09:19:07 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-2-0-compute-0-iryjby[94782]: 11/12/2025 09:19:07 : epoch 693a8bfe : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fef70004700 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 11 09:19:08 compute-0 ceph-mon[74426]: osdmap e142: 3 total, 3 up, 3 in
Dec 11 09:19:08 compute-0 ceph-mon[74426]: pgmap v54: 353 pgs: 1 activating+remapped, 1 remapped+peering, 351 active+clean; 457 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 510 B/s rd, 0 op/s; 5/221 objects misplaced (2.262%)
Dec 11 09:19:08 compute-0 ceph-mon[74426]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Dec 11 09:19:08 compute-0 ceph-mon[74426]: log_channel(audit) log [DBG] : from='mgr.14577 192.168.122.100:0/2102670525' entity='mgr.compute-0.wwpcae' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec 11 09:19:08 compute-0 sudo[103671]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vjvlznjrbtjidwiwuvyoprqykllgnjem ; /usr/bin/python3'
Dec 11 09:19:08 compute-0 sudo[103671]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 11 09:19:08 compute-0 python3[103673]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint radosgw-admin quay.io/ceph/ceph:v19 --fsid 31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch ps --daemon-type rgw --format json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 11 09:19:08 compute-0 podman[103674]: 2025-12-11 09:19:08.648004813 +0000 UTC m=+0.043868402 container create 2c17bc7fa99ec08eff7bb467224e87c997b0995e80ae1d047f7c3437de04c2dc (image=quay.io/ceph/ceph:v19, name=brave_jackson, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, CEPH_REF=squid, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 11 09:19:08 compute-0 systemd[1]: Started libpod-conmon-2c17bc7fa99ec08eff7bb467224e87c997b0995e80ae1d047f7c3437de04c2dc.scope.
Dec 11 09:19:08 compute-0 systemd[1]: Started libcrun container.
Dec 11 09:19:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7471b85dfbe47d4fe7e4d2eb80fab5ee11b8a54fdf7ce6b424210eacd8d01288/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 11 09:19:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7471b85dfbe47d4fe7e4d2eb80fab5ee11b8a54fdf7ce6b424210eacd8d01288/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec 11 09:19:08 compute-0 podman[103674]: 2025-12-11 09:19:08.629261297 +0000 UTC m=+0.025124906 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Dec 11 09:19:08 compute-0 podman[103674]: 2025-12-11 09:19:08.731865014 +0000 UTC m=+0.127728633 container init 2c17bc7fa99ec08eff7bb467224e87c997b0995e80ae1d047f7c3437de04c2dc (image=quay.io/ceph/ceph:v19, name=brave_jackson, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250325, io.buildah.version=1.40.1)
Dec 11 09:19:08 compute-0 podman[103674]: 2025-12-11 09:19:08.738852553 +0000 UTC m=+0.134716142 container start 2c17bc7fa99ec08eff7bb467224e87c997b0995e80ae1d047f7c3437de04c2dc (image=quay.io/ceph/ceph:v19, name=brave_jackson, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.40.1, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 11 09:19:08 compute-0 podman[103674]: 2025-12-11 09:19:08.743197788 +0000 UTC m=+0.139061407 container attach 2c17bc7fa99ec08eff7bb467224e87c997b0995e80ae1d047f7c3437de04c2dc (image=quay.io/ceph/ceph:v19, name=brave_jackson, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec 11 09:19:08 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-2-0-compute-0-iryjby[94782]: 11/12/2025 09:19:08 : epoch 693a8bfe : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fef4c0036c0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 11 09:19:08 compute-0 brave_jackson[103690]: ERROR: invalid flag --daemon-type
Dec 11 09:19:08 compute-0 systemd[1]: libpod-2c17bc7fa99ec08eff7bb467224e87c997b0995e80ae1d047f7c3437de04c2dc.scope: Deactivated successfully.
Dec 11 09:19:08 compute-0 podman[103674]: 2025-12-11 09:19:08.801207312 +0000 UTC m=+0.197070921 container died 2c17bc7fa99ec08eff7bb467224e87c997b0995e80ae1d047f7c3437de04c2dc (image=quay.io/ceph/ceph:v19, name=brave_jackson, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec 11 09:19:08 compute-0 systemd[1]: var-lib-containers-storage-overlay-7471b85dfbe47d4fe7e4d2eb80fab5ee11b8a54fdf7ce6b424210eacd8d01288-merged.mount: Deactivated successfully.
Dec 11 09:19:08 compute-0 podman[103674]: 2025-12-11 09:19:08.843365439 +0000 UTC m=+0.239229028 container remove 2c17bc7fa99ec08eff7bb467224e87c997b0995e80ae1d047f7c3437de04c2dc (image=quay.io/ceph/ceph:v19, name=brave_jackson, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec 11 09:19:08 compute-0 systemd[1]: libpod-conmon-2c17bc7fa99ec08eff7bb467224e87c997b0995e80ae1d047f7c3437de04c2dc.scope: Deactivated successfully.
Dec 11 09:19:08 compute-0 sudo[103671]: pam_unix(sudo:session): session closed for user root
Dec 11 09:19:09 compute-0 radosgw[93354]: ====== starting new request req=0x7f36cd2845d0 =====
Dec 11 09:19:09 compute-0 radosgw[93354]: ====== req done req=0x7f36cd2845d0 op status=0 http_status=200 latency=0.001000032s ======
Dec 11 09:19:09 compute-0 radosgw[93354]: beast: 0x7f36cd2845d0: 192.168.122.102 - anonymous [11/Dec/2025:09:19:09.024 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Dec 11 09:19:09 compute-0 radosgw[93354]: ====== starting new request req=0x7f36cd2845d0 =====
Dec 11 09:19:09 compute-0 radosgw[93354]: ====== req done req=0x7f36cd2845d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 11 09:19:09 compute-0 radosgw[93354]: beast: 0x7f36cd2845d0: 192.168.122.100 - anonymous [11/Dec/2025:09:19:09.031 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 11 09:19:09 compute-0 ceph-mon[74426]: from='mgr.14577 192.168.122.100:0/2102670525' entity='mgr.compute-0.wwpcae' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec 11 09:19:09 compute-0 ceph-mgr[74715]: log_channel(cluster) log [DBG] : pgmap v55: 353 pgs: 1 activating+remapped, 1 remapped+peering, 351 active+clean; 457 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 op/s; 5/221 objects misplaced (2.262%)
Dec 11 09:19:09 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-2-0-compute-0-iryjby[94782]: 11/12/2025 09:19:09 : epoch 693a8bfe : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fef4c0036c0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 11 09:19:09 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-2-0-compute-0-iryjby[94782]: 11/12/2025 09:19:09 : epoch 693a8bfe : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fef540047c0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 11 09:19:10 compute-0 ceph-mon[74426]: pgmap v55: 353 pgs: 1 activating+remapped, 1 remapped+peering, 351 active+clean; 457 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 op/s; 5/221 objects misplaced (2.262%)
Dec 11 09:19:10 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-2-0-compute-0-iryjby[94782]: 11/12/2025 09:19:10 : epoch 693a8bfe : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fef70004720 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 11 09:19:11 compute-0 radosgw[93354]: ====== starting new request req=0x7f36cd2845d0 =====
Dec 11 09:19:11 compute-0 radosgw[93354]: ====== req done req=0x7f36cd2845d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 11 09:19:11 compute-0 radosgw[93354]: beast: 0x7f36cd2845d0: 192.168.122.102 - anonymous [11/Dec/2025:09:19:11.028 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 11 09:19:11 compute-0 radosgw[93354]: ====== starting new request req=0x7f36cd2845d0 =====
Dec 11 09:19:11 compute-0 radosgw[93354]: ====== req done req=0x7f36cd2845d0 op status=0 http_status=200 latency=0.001000032s ======
Dec 11 09:19:11 compute-0 radosgw[93354]: beast: 0x7f36cd2845d0: 192.168.122.100 - anonymous [11/Dec/2025:09:19:11.032 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Dec 11 09:19:11 compute-0 ceph-mgr[74715]: log_channel(cluster) log [DBG] : pgmap v56: 353 pgs: 353 active+clean; 457 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 146 B/s rd, 0 op/s; 15 B/s, 1 objects/s recovering
Dec 11 09:19:11 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-2-0-compute-0-iryjby[94782]: 11/12/2025 09:19:11 : epoch 693a8bfe : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fef4c0036c0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 11 09:19:11 compute-0 ceph-mon[74426]: mon.compute-0@0(leader).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 11 09:19:11 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-2-0-compute-0-iryjby[94782]: 11/12/2025 09:19:11 : epoch 693a8bfe : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fef4c0036c0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 11 09:19:12 compute-0 ceph-mon[74426]: pgmap v56: 353 pgs: 353 active+clean; 457 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 146 B/s rd, 0 op/s; 15 B/s, 1 objects/s recovering
Dec 11 09:19:12 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-2-0-compute-0-iryjby[94782]: 11/12/2025 09:19:12 : epoch 693a8bfe : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fef540047c0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 11 09:19:13 compute-0 radosgw[93354]: ====== starting new request req=0x7f36cd2845d0 =====
Dec 11 09:19:13 compute-0 radosgw[93354]: ====== req done req=0x7f36cd2845d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 11 09:19:13 compute-0 radosgw[93354]: beast: 0x7f36cd2845d0: 192.168.122.102 - anonymous [11/Dec/2025:09:19:13.030 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 11 09:19:13 compute-0 radosgw[93354]: ====== starting new request req=0x7f36cd2845d0 =====
Dec 11 09:19:13 compute-0 radosgw[93354]: ====== req done req=0x7f36cd2845d0 op status=0 http_status=200 latency=0.001000032s ======
Dec 11 09:19:13 compute-0 radosgw[93354]: beast: 0x7f36cd2845d0: 192.168.122.100 - anonymous [11/Dec/2025:09:19:13.035 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Dec 11 09:19:13 compute-0 ceph-mgr[74715]: log_channel(cluster) log [DBG] : pgmap v57: 353 pgs: 353 active+clean; 457 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 127 B/s rd, 0 op/s; 13 B/s, 1 objects/s recovering
Dec 11 09:19:13 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-2-0-compute-0-iryjby[94782]: 11/12/2025 09:19:13 : epoch 693a8bfe : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fef70004720 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 11 09:19:13 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-2-0-compute-0-iryjby[94782]: 11/12/2025 09:19:13 : epoch 693a8bfe : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fef4c0036c0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 11 09:19:14 compute-0 ceph-mon[74426]: pgmap v57: 353 pgs: 353 active+clean; 457 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 127 B/s rd, 0 op/s; 13 B/s, 1 objects/s recovering
Dec 11 09:19:14 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-mgr-compute-0-wwpcae[74711]: ::ffff:192.168.122.100 - - [11/Dec/2025:09:19:14] "GET /metrics HTTP/1.1" 200 48222 "" "Prometheus/2.51.0"
Dec 11 09:19:14 compute-0 ceph-mgr[74715]: [prometheus INFO cherrypy.access.139680037156848] ::ffff:192.168.122.100 - - [11/Dec/2025:09:19:14] "GET /metrics HTTP/1.1" 200 48222 "" "Prometheus/2.51.0"
Dec 11 09:19:14 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-2-0-compute-0-iryjby[94782]: 11/12/2025 09:19:14 : epoch 693a8bfe : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fef60003490 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 11 09:19:15 compute-0 radosgw[93354]: ====== starting new request req=0x7f36cd2845d0 =====
Dec 11 09:19:15 compute-0 radosgw[93354]: ====== req done req=0x7f36cd2845d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 11 09:19:15 compute-0 radosgw[93354]: beast: 0x7f36cd2845d0: 192.168.122.102 - anonymous [11/Dec/2025:09:19:15.034 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 11 09:19:15 compute-0 radosgw[93354]: ====== starting new request req=0x7f36cd2845d0 =====
Dec 11 09:19:15 compute-0 radosgw[93354]: ====== req done req=0x7f36cd2845d0 op status=0 http_status=200 latency=0.001000032s ======
Dec 11 09:19:15 compute-0 radosgw[93354]: beast: 0x7f36cd2845d0: 192.168.122.100 - anonymous [11/Dec/2025:09:19:15.038 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Dec 11 09:19:15 compute-0 ceph-mgr[74715]: log_channel(cluster) log [DBG] : pgmap v58: 353 pgs: 353 active+clean; 457 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 307 B/s rd, 0 op/s; 11 B/s, 1 objects/s recovering
Dec 11 09:19:15 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-2-0-compute-0-iryjby[94782]: 11/12/2025 09:19:15 : epoch 693a8bfe : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fef60003490 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 11 09:19:15 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-2-0-compute-0-iryjby[94782]: 11/12/2025 09:19:15 : epoch 693a8bfe : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fef70004740 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 11 09:19:16 compute-0 ceph-mon[74426]: pgmap v58: 353 pgs: 353 active+clean; 457 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 307 B/s rd, 0 op/s; 11 B/s, 1 objects/s recovering
Dec 11 09:19:16 compute-0 ceph-mon[74426]: mon.compute-0@0(leader).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 11 09:19:16 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-2-0-compute-0-iryjby[94782]: 11/12/2025 09:19:16 : epoch 693a8bfe : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fef4c0036c0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 11 09:19:17 compute-0 radosgw[93354]: ====== starting new request req=0x7f36cd2845d0 =====
Dec 11 09:19:17 compute-0 radosgw[93354]: ====== req done req=0x7f36cd2845d0 op status=0 http_status=200 latency=0.001000031s ======
Dec 11 09:19:17 compute-0 radosgw[93354]: beast: 0x7f36cd2845d0: 192.168.122.102 - anonymous [11/Dec/2025:09:19:17.038 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Dec 11 09:19:17 compute-0 radosgw[93354]: ====== starting new request req=0x7f36cd2845d0 =====
Dec 11 09:19:17 compute-0 radosgw[93354]: ====== req done req=0x7f36cd2845d0 op status=0 http_status=200 latency=0.001000032s ======
Dec 11 09:19:17 compute-0 radosgw[93354]: beast: 0x7f36cd2845d0: 192.168.122.100 - anonymous [11/Dec/2025:09:19:17.041 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Dec 11 09:19:17 compute-0 ceph-mgr[74715]: log_channel(cluster) log [DBG] : pgmap v59: 353 pgs: 353 active+clean; 457 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 285 B/s rd, 0 op/s; 10 B/s, 0 objects/s recovering
Dec 11 09:19:17 compute-0 sudo[103730]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 11 09:19:17 compute-0 sudo[103730]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 11 09:19:17 compute-0 sudo[103730]: pam_unix(sudo:session): session closed for user root
Dec 11 09:19:17 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-2-0-compute-0-iryjby[94782]: 11/12/2025 09:19:17 : epoch 693a8bfe : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fef4c0036c0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 11 09:19:17 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-2-0-compute-0-iryjby[94782]: 11/12/2025 09:19:17 : epoch 693a8bfe : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fef540047c0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 11 09:19:18 compute-0 ceph-mon[74426]: pgmap v59: 353 pgs: 353 active+clean; 457 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 285 B/s rd, 0 op/s; 10 B/s, 0 objects/s recovering
Dec 11 09:19:18 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-2-0-compute-0-iryjby[94782]: 11/12/2025 09:19:18 : epoch 693a8bfe : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fef70004760 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 11 09:19:18 compute-0 sudo[103780]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lbwmuqaamrlhraxllboohgxleafxaysu ; /usr/bin/python3'
Dec 11 09:19:18 compute-0 sudo[103780]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 11 09:19:19 compute-0 radosgw[93354]: ====== starting new request req=0x7f36cd2845d0 =====
Dec 11 09:19:19 compute-0 radosgw[93354]: ====== req done req=0x7f36cd2845d0 op status=0 http_status=200 latency=0.002000063s ======
Dec 11 09:19:19 compute-0 radosgw[93354]: ====== starting new request req=0x7f36cd2035d0 =====
Dec 11 09:19:19 compute-0 radosgw[93354]: beast: 0x7f36cd2845d0: 192.168.122.102 - anonymous [11/Dec/2025:09:19:19.042 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000063s
Dec 11 09:19:19 compute-0 radosgw[93354]: ====== req done req=0x7f36cd2035d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 11 09:19:19 compute-0 radosgw[93354]: beast: 0x7f36cd2035d0: 192.168.122.100 - anonymous [11/Dec/2025:09:19:19.044 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 11 09:19:19 compute-0 ceph-mgr[74715]: log_channel(cluster) log [DBG] : pgmap v60: 353 pgs: 353 active+clean; 457 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s; 9 B/s, 0 objects/s recovering
Dec 11 09:19:19 compute-0 python3[103783]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint radosgw-admin quay.io/ceph/ceph:v19 --fsid 31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch ps --daemon-type rgw --format json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 11 09:19:19 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-2-0-compute-0-iryjby[94782]: 11/12/2025 09:19:19 : epoch 693a8bfe : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fef60003490 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 11 09:19:19 compute-0 podman[103784]: 2025-12-11 09:19:19.18234387 +0000 UTC m=+0.031265758 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Dec 11 09:19:19 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-2-0-compute-0-iryjby[94782]: 11/12/2025 09:19:19 : epoch 693a8bfe : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fef60003490 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 11 09:19:19 compute-0 podman[103784]: 2025-12-11 09:19:19.498623407 +0000 UTC m=+0.347545285 container create e930b8652f97dcc445b95549e7c4a1d415560fde63c355584d7eec738a72717a (image=quay.io/ceph/ceph:v19, name=crazy_bell, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1)
Dec 11 09:19:19 compute-0 systemd[1]: Started libpod-conmon-e930b8652f97dcc445b95549e7c4a1d415560fde63c355584d7eec738a72717a.scope.
Dec 11 09:19:19 compute-0 ceph-mon[74426]: pgmap v60: 353 pgs: 353 active+clean; 457 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s; 9 B/s, 0 objects/s recovering
Dec 11 09:19:19 compute-0 systemd[1]: Started libcrun container.
Dec 11 09:19:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/93fd5bc27c755a2e8d9023f115457b498fa047e9b52470c0b385db6d4f5cf2c2/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec 11 09:19:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/93fd5bc27c755a2e8d9023f115457b498fa047e9b52470c0b385db6d4f5cf2c2/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 11 09:19:19 compute-0 podman[103784]: 2025-12-11 09:19:19.837907764 +0000 UTC m=+0.686829672 container init e930b8652f97dcc445b95549e7c4a1d415560fde63c355584d7eec738a72717a (image=quay.io/ceph/ceph:v19, name=crazy_bell, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Dec 11 09:19:19 compute-0 podman[103784]: 2025-12-11 09:19:19.849978631 +0000 UTC m=+0.698900509 container start e930b8652f97dcc445b95549e7c4a1d415560fde63c355584d7eec738a72717a (image=quay.io/ceph/ceph:v19, name=crazy_bell, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Dec 11 09:19:19 compute-0 crazy_bell[103800]: ERROR: invalid flag --daemon-type
Dec 11 09:19:19 compute-0 systemd[1]: libpod-e930b8652f97dcc445b95549e7c4a1d415560fde63c355584d7eec738a72717a.scope: Deactivated successfully.
Dec 11 09:19:20 compute-0 podman[103784]: 2025-12-11 09:19:20.017974683 +0000 UTC m=+0.866896591 container attach e930b8652f97dcc445b95549e7c4a1d415560fde63c355584d7eec738a72717a (image=quay.io/ceph/ceph:v19, name=crazy_bell, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 11 09:19:20 compute-0 podman[103784]: 2025-12-11 09:19:20.018567701 +0000 UTC m=+0.867489589 container died e930b8652f97dcc445b95549e7c4a1d415560fde63c355584d7eec738a72717a (image=quay.io/ceph/ceph:v19, name=crazy_bell, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Dec 11 09:19:20 compute-0 systemd[1]: var-lib-containers-storage-overlay-93fd5bc27c755a2e8d9023f115457b498fa047e9b52470c0b385db6d4f5cf2c2-merged.mount: Deactivated successfully.
Dec 11 09:19:20 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-2-0-compute-0-iryjby[94782]: 11/12/2025 09:19:20 : epoch 693a8bfe : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fef540047c0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 11 09:19:20 compute-0 podman[103784]: 2025-12-11 09:19:20.82212187 +0000 UTC m=+1.671043748 container remove e930b8652f97dcc445b95549e7c4a1d415560fde63c355584d7eec738a72717a (image=quay.io/ceph/ceph:v19, name=crazy_bell, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True)
Dec 11 09:19:20 compute-0 sudo[103780]: pam_unix(sudo:session): session closed for user root
Dec 11 09:19:20 compute-0 systemd[1]: libpod-conmon-e930b8652f97dcc445b95549e7c4a1d415560fde63c355584d7eec738a72717a.scope: Deactivated successfully.
Dec 11 09:19:21 compute-0 radosgw[93354]: ====== starting new request req=0x7f36cd2845d0 =====
Dec 11 09:19:21 compute-0 radosgw[93354]: ====== req done req=0x7f36cd2845d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 11 09:19:21 compute-0 radosgw[93354]: beast: 0x7f36cd2845d0: 192.168.122.102 - anonymous [11/Dec/2025:09:19:21.046 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 11 09:19:21 compute-0 radosgw[93354]: ====== starting new request req=0x7f36cd2035d0 =====
Dec 11 09:19:21 compute-0 radosgw[93354]: ====== req done req=0x7f36cd2035d0 op status=0 http_status=200 latency=0.001000031s ======
Dec 11 09:19:21 compute-0 radosgw[93354]: beast: 0x7f36cd2035d0: 192.168.122.100 - anonymous [11/Dec/2025:09:19:21.047 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Dec 11 09:19:21 compute-0 ceph-mgr[74715]: log_channel(cluster) log [DBG] : pgmap v61: 353 pgs: 353 active+clean; 457 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 op/s; 9 B/s, 0 objects/s recovering
Dec 11 09:19:21 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-2-0-compute-0-iryjby[94782]: 11/12/2025 09:19:21 : epoch 693a8bfe : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fef4c0036c0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 11 09:19:21 compute-0 ceph-mon[74426]: mon.compute-0@0(leader).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 11 09:19:21 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-2-0-compute-0-iryjby[94782]: 11/12/2025 09:19:21 : epoch 693a8bfe : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fef50000f50 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 11 09:19:22 compute-0 ceph-mon[74426]: pgmap v61: 353 pgs: 353 active+clean; 457 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 op/s; 9 B/s, 0 objects/s recovering
Dec 11 09:19:22 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-2-0-compute-0-iryjby[94782]: 11/12/2025 09:19:22 : epoch 693a8bfe : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fef60003540 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 11 09:19:23 compute-0 radosgw[93354]: ====== starting new request req=0x7f36cd2845d0 =====
Dec 11 09:19:23 compute-0 radosgw[93354]: ====== starting new request req=0x7f36cd2035d0 =====
Dec 11 09:19:23 compute-0 radosgw[93354]: ====== req done req=0x7f36cd2845d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 11 09:19:23 compute-0 radosgw[93354]: beast: 0x7f36cd2845d0: 192.168.122.100 - anonymous [11/Dec/2025:09:19:23.050 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 11 09:19:23 compute-0 radosgw[93354]: ====== req done req=0x7f36cd2035d0 op status=0 http_status=200 latency=0.001000032s ======
Dec 11 09:19:23 compute-0 radosgw[93354]: beast: 0x7f36cd2035d0: 192.168.122.102 - anonymous [11/Dec/2025:09:19:23.050 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Dec 11 09:19:23 compute-0 ceph-mgr[74715]: [balancer INFO root] Optimize plan auto_2025-12-11_09:19:23
Dec 11 09:19:23 compute-0 ceph-mgr[74715]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec 11 09:19:23 compute-0 ceph-mgr[74715]: [balancer INFO root] do_upmap
Dec 11 09:19:23 compute-0 ceph-mgr[74715]: [balancer INFO root] pools ['.rgw.root', 'default.rgw.log', '.mgr', 'cephfs.cephfs.meta', 'images', '.nfs', 'default.rgw.meta', 'vms', 'backups', 'cephfs.cephfs.data', 'default.rgw.control', 'volumes']
Dec 11 09:19:23 compute-0 ceph-mgr[74715]: [balancer INFO root] prepared 0/10 upmap changes
Dec 11 09:19:23 compute-0 ceph-mgr[74715]: log_channel(cluster) log [DBG] : pgmap v62: 353 pgs: 353 active+clean; 457 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 11 09:19:23 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-2-0-compute-0-iryjby[94782]: 11/12/2025 09:19:23 : epoch 693a8bfe : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fef540047c0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 11 09:19:23 compute-0 ceph-mgr[74715]: [pg_autoscaler INFO root] _maybe_adjust
Dec 11 09:19:23 compute-0 ceph-mgr[74715]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 11 09:19:23 compute-0 ceph-mgr[74715]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec 11 09:19:23 compute-0 ceph-mgr[74715]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 11 09:19:23 compute-0 ceph-mgr[74715]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 11 09:19:23 compute-0 ceph-mgr[74715]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 11 09:19:23 compute-0 ceph-mgr[74715]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 11 09:19:23 compute-0 ceph-mgr[74715]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 11 09:19:23 compute-0 ceph-mgr[74715]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 11 09:19:23 compute-0 ceph-mgr[74715]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 11 09:19:23 compute-0 ceph-mgr[74715]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 11 09:19:23 compute-0 ceph-mgr[74715]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 11 09:19:23 compute-0 ceph-mgr[74715]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Dec 11 09:19:23 compute-0 ceph-mgr[74715]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 11 09:19:23 compute-0 ceph-mgr[74715]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 11 09:19:23 compute-0 ceph-mgr[74715]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 11 09:19:23 compute-0 ceph-mgr[74715]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Dec 11 09:19:23 compute-0 ceph-mgr[74715]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 11 09:19:23 compute-0 ceph-mgr[74715]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Dec 11 09:19:23 compute-0 ceph-mgr[74715]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 11 09:19:23 compute-0 ceph-mgr[74715]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Dec 11 09:19:23 compute-0 ceph-mgr[74715]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 11 09:19:23 compute-0 ceph-mgr[74715]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 11 09:19:23 compute-0 ceph-mgr[74715]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 11 09:19:23 compute-0 ceph-mgr[74715]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 0.0 of space, bias 4.0, pg target 0.0 quantized to 32 (current 32)
Dec 11 09:19:23 compute-0 ceph-mon[74426]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Dec 11 09:19:23 compute-0 ceph-mon[74426]: log_channel(audit) log [DBG] : from='mgr.14577 192.168.122.100:0/2102670525' entity='mgr.compute-0.wwpcae' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec 11 09:19:23 compute-0 ceph-mgr[74715]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec 11 09:19:23 compute-0 ceph-mgr[74715]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 11 09:19:23 compute-0 ceph-mgr[74715]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 11 09:19:23 compute-0 ceph-mgr[74715]: [volumes INFO mgr_util] scanning for idle connections..
Dec 11 09:19:23 compute-0 ceph-mgr[74715]: [volumes INFO mgr_util] cleaning up connections: []
Dec 11 09:19:23 compute-0 ceph-mgr[74715]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 11 09:19:23 compute-0 ceph-mgr[74715]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 11 09:19:23 compute-0 ceph-mgr[74715]: [volumes INFO mgr_util] scanning for idle connections..
Dec 11 09:19:23 compute-0 ceph-mgr[74715]: [volumes INFO mgr_util] cleaning up connections: []
Dec 11 09:19:23 compute-0 ceph-mgr[74715]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec 11 09:19:23 compute-0 ceph-mgr[74715]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 11 09:19:23 compute-0 ceph-mgr[74715]: [volumes INFO mgr_util] scanning for idle connections..
Dec 11 09:19:23 compute-0 ceph-mgr[74715]: [volumes INFO mgr_util] cleaning up connections: []
Dec 11 09:19:23 compute-0 ceph-mgr[74715]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 11 09:19:23 compute-0 ceph-mgr[74715]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 11 09:19:23 compute-0 ceph-mgr[74715]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 11 09:19:23 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-2-0-compute-0-iryjby[94782]: 11/12/2025 09:19:23 : epoch 693a8bfe : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fef580026d0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 11 09:19:23 compute-0 ceph-mon[74426]: from='mgr.14577 192.168.122.100:0/2102670525' entity='mgr.compute-0.wwpcae' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec 11 09:19:23 compute-0 ceph-mon[74426]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #24. Immutable memtables: 0.
Dec 11 09:19:23 compute-0 ceph-mon[74426]: rocksdb: (Original Log Time 2025/12/11-09:19:23.710889) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Dec 11 09:19:23 compute-0 ceph-mon[74426]: rocksdb: [db/flush_job.cc:856] [default] [JOB 7] Flushing memtable with next log file: 24
Dec 11 09:19:23 compute-0 ceph-mon[74426]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765444763711148, "job": 7, "event": "flush_started", "num_memtables": 1, "num_entries": 2001, "num_deletes": 252, "total_data_size": 5886563, "memory_usage": 6146592, "flush_reason": "Manual Compaction"}
Dec 11 09:19:23 compute-0 ceph-mon[74426]: rocksdb: [db/flush_job.cc:885] [default] [JOB 7] Level-0 flush table #25: started
Dec 11 09:19:23 compute-0 ceph-mon[74426]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765444763763142, "cf_name": "default", "job": 7, "event": "table_file_creation", "file_number": 25, "file_size": 5492298, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 8776, "largest_seqno": 10776, "table_properties": {"data_size": 5482595, "index_size": 6069, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2565, "raw_key_size": 21769, "raw_average_key_size": 21, "raw_value_size": 5462297, "raw_average_value_size": 5334, "num_data_blocks": 266, "num_entries": 1024, "num_filter_entries": 1024, "num_deletions": 252, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765444687, "oldest_key_time": 1765444687, "file_creation_time": 1765444763, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "e1ac8294-d02a-459a-8058-7f05c4f78e7d", "db_session_id": "U2WUY0PZPH8WS8N5I572", "orig_file_number": 25, "seqno_to_time_mapping": "N/A"}}
Dec 11 09:19:23 compute-0 ceph-mon[74426]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 7] Flush lasted 52279 microseconds, and 19478 cpu microseconds.
Dec 11 09:19:23 compute-0 ceph-mon[74426]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 11 09:19:23 compute-0 ceph-mon[74426]: rocksdb: (Original Log Time 2025/12/11-09:19:23.763241) [db/flush_job.cc:967] [default] [JOB 7] Level-0 flush table #25: 5492298 bytes OK
Dec 11 09:19:23 compute-0 ceph-mon[74426]: rocksdb: (Original Log Time 2025/12/11-09:19:23.763286) [db/memtable_list.cc:519] [default] Level-0 commit table #25 started
Dec 11 09:19:23 compute-0 ceph-mon[74426]: rocksdb: (Original Log Time 2025/12/11-09:19:23.764576) [db/memtable_list.cc:722] [default] Level-0 commit table #25: memtable #1 done
Dec 11 09:19:23 compute-0 ceph-mon[74426]: rocksdb: (Original Log Time 2025/12/11-09:19:23.764598) EVENT_LOG_v1 {"time_micros": 1765444763764594, "job": 7, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Dec 11 09:19:23 compute-0 ceph-mon[74426]: rocksdb: (Original Log Time 2025/12/11-09:19:23.764614) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Dec 11 09:19:23 compute-0 ceph-mon[74426]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 7] Try to delete WAL files size 5877428, prev total WAL file size 5877428, number of live WAL files 2.
Dec 11 09:19:23 compute-0 ceph-mon[74426]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000021.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 11 09:19:23 compute-0 ceph-mon[74426]: rocksdb: (Original Log Time 2025/12/11-09:19:23.767330) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F7300323531' seq:72057594037927935, type:22 .. '7061786F7300353033' seq:0, type:0; will stop at (end)
Dec 11 09:19:23 compute-0 ceph-mon[74426]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 8] Compacting 1@0 + 1@6 files to L6, score -1.00
Dec 11 09:19:23 compute-0 ceph-mon[74426]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 7 Base level 0, inputs: [25(5363KB)], [23(12MB)]
Dec 11 09:19:23 compute-0 ceph-mon[74426]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765444763767504, "job": 8, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [25], "files_L6": [23], "score": -1, "input_data_size": 18731813, "oldest_snapshot_seqno": -1}
Dec 11 09:19:23 compute-0 ceph-mon[74426]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 8] Generated table #26: 4030 keys, 14175362 bytes, temperature: kUnknown
Dec 11 09:19:23 compute-0 ceph-mon[74426]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765444763899882, "cf_name": "default", "job": 8, "event": "table_file_creation", "file_number": 26, "file_size": 14175362, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 14142740, "index_size": 21440, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 10117, "raw_key_size": 102849, "raw_average_key_size": 25, "raw_value_size": 14063241, "raw_average_value_size": 3489, "num_data_blocks": 921, "num_entries": 4030, "num_filter_entries": 4030, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765444346, "oldest_key_time": 0, "file_creation_time": 1765444763, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "e1ac8294-d02a-459a-8058-7f05c4f78e7d", "db_session_id": "U2WUY0PZPH8WS8N5I572", "orig_file_number": 26, "seqno_to_time_mapping": "N/A"}}
Dec 11 09:19:23 compute-0 ceph-mon[74426]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 11 09:19:23 compute-0 ceph-mon[74426]: rocksdb: (Original Log Time 2025/12/11-09:19:23.900280) [db/compaction/compaction_job.cc:1663] [default] [JOB 8] Compacted 1@0 + 1@6 files to L6 => 14175362 bytes
Dec 11 09:19:23 compute-0 ceph-mon[74426]: rocksdb: (Original Log Time 2025/12/11-09:19:23.906743) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 141.4 rd, 107.0 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(5.2, 12.6 +0.0 blob) out(13.5 +0.0 blob), read-write-amplify(6.0) write-amplify(2.6) OK, records in: 4567, records dropped: 537 output_compression: NoCompression
Dec 11 09:19:23 compute-0 ceph-mon[74426]: rocksdb: (Original Log Time 2025/12/11-09:19:23.906807) EVENT_LOG_v1 {"time_micros": 1765444763906787, "job": 8, "event": "compaction_finished", "compaction_time_micros": 132504, "compaction_time_cpu_micros": 50689, "output_level": 6, "num_output_files": 1, "total_output_size": 14175362, "num_input_records": 4567, "num_output_records": 4030, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Dec 11 09:19:23 compute-0 ceph-mon[74426]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000025.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 11 09:19:23 compute-0 ceph-mon[74426]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765444763908724, "job": 8, "event": "table_file_deletion", "file_number": 25}
Dec 11 09:19:23 compute-0 ceph-mon[74426]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000023.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 11 09:19:23 compute-0 ceph-mon[74426]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765444763911136, "job": 8, "event": "table_file_deletion", "file_number": 23}
Dec 11 09:19:23 compute-0 ceph-mon[74426]: rocksdb: (Original Log Time 2025/12/11-09:19:23.767135) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 11 09:19:23 compute-0 ceph-mon[74426]: rocksdb: (Original Log Time 2025/12/11-09:19:23.911235) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 11 09:19:23 compute-0 ceph-mon[74426]: rocksdb: (Original Log Time 2025/12/11-09:19:23.911242) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 11 09:19:23 compute-0 ceph-mon[74426]: rocksdb: (Original Log Time 2025/12/11-09:19:23.911243) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 11 09:19:23 compute-0 ceph-mon[74426]: rocksdb: (Original Log Time 2025/12/11-09:19:23.911245) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 11 09:19:23 compute-0 ceph-mon[74426]: rocksdb: (Original Log Time 2025/12/11-09:19:23.911247) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 11 09:19:24 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-mgr-compute-0-wwpcae[74711]: ::ffff:192.168.122.100 - - [11/Dec/2025:09:19:24] "GET /metrics HTTP/1.1" 200 48223 "" "Prometheus/2.51.0"
Dec 11 09:19:24 compute-0 ceph-mgr[74715]: [prometheus INFO cherrypy.access.139680037156848] ::ffff:192.168.122.100 - - [11/Dec/2025:09:19:24] "GET /metrics HTTP/1.1" 200 48223 "" "Prometheus/2.51.0"
Dec 11 09:19:24 compute-0 ceph-mon[74426]: pgmap v62: 353 pgs: 353 active+clean; 457 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 11 09:19:24 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-2-0-compute-0-iryjby[94782]: 11/12/2025 09:19:24 : epoch 693a8bfe : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fef50000f50 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 11 09:19:25 compute-0 radosgw[93354]: ====== starting new request req=0x7f36cd2845d0 =====
Dec 11 09:19:25 compute-0 radosgw[93354]: ====== req done req=0x7f36cd2845d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 11 09:19:25 compute-0 radosgw[93354]: beast: 0x7f36cd2845d0: 192.168.122.100 - anonymous [11/Dec/2025:09:19:25.052 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 11 09:19:25 compute-0 radosgw[93354]: ====== starting new request req=0x7f36cd2845d0 =====
Dec 11 09:19:25 compute-0 radosgw[93354]: ====== req done req=0x7f36cd2845d0 op status=0 http_status=200 latency=0.001000031s ======
Dec 11 09:19:25 compute-0 radosgw[93354]: beast: 0x7f36cd2845d0: 192.168.122.102 - anonymous [11/Dec/2025:09:19:25.054 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Dec 11 09:19:25 compute-0 ceph-mgr[74715]: log_channel(cluster) log [DBG] : pgmap v63: 353 pgs: 353 active+clean; 457 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 0 op/s
Dec 11 09:19:25 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-2-0-compute-0-iryjby[94782]: 11/12/2025 09:19:25 : epoch 693a8bfe : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fef50000f50 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 11 09:19:25 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-2-0-compute-0-iryjby[94782]: 11/12/2025 09:19:25 : epoch 693a8bfe : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fef580026d0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 11 09:19:25 compute-0 ceph-mon[74426]: pgmap v63: 353 pgs: 353 active+clean; 457 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 0 op/s
Dec 11 09:19:26 compute-0 ceph-mon[74426]: mon.compute-0@0(leader).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 11 09:19:26 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-2-0-compute-0-iryjby[94782]: 11/12/2025 09:19:26 : epoch 693a8bfe : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fef540047c0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 11 09:19:27 compute-0 radosgw[93354]: ====== starting new request req=0x7f36cd2845d0 =====
Dec 11 09:19:27 compute-0 radosgw[93354]: ====== req done req=0x7f36cd2845d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 11 09:19:27 compute-0 radosgw[93354]: beast: 0x7f36cd2845d0: 192.168.122.100 - anonymous [11/Dec/2025:09:19:27.054 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 11 09:19:27 compute-0 radosgw[93354]: ====== starting new request req=0x7f36cd2845d0 =====
Dec 11 09:19:27 compute-0 radosgw[93354]: ====== req done req=0x7f36cd2845d0 op status=0 http_status=200 latency=0.001000031s ======
Dec 11 09:19:27 compute-0 radosgw[93354]: beast: 0x7f36cd2845d0: 192.168.122.102 - anonymous [11/Dec/2025:09:19:27.056 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Dec 11 09:19:27 compute-0 ceph-mgr[74715]: log_channel(cluster) log [DBG] : pgmap v64: 353 pgs: 353 active+clean; 457 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 11 09:19:27 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-2-0-compute-0-iryjby[94782]: 11/12/2025 09:19:27 : epoch 693a8bfe : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fef60003540 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 11 09:19:27 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-2-0-compute-0-iryjby[94782]: 11/12/2025 09:19:27 : epoch 693a8bfe : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fef50000f50 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 11 09:19:28 compute-0 ceph-mon[74426]: pgmap v64: 353 pgs: 353 active+clean; 457 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 11 09:19:28 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-2-0-compute-0-iryjby[94782]: 11/12/2025 09:19:28 : epoch 693a8bfe : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fef580020f0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 11 09:19:29 compute-0 radosgw[93354]: ====== starting new request req=0x7f36cd2845d0 =====
Dec 11 09:19:29 compute-0 radosgw[93354]: ====== req done req=0x7f36cd2845d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 11 09:19:29 compute-0 radosgw[93354]: beast: 0x7f36cd2845d0: 192.168.122.100 - anonymous [11/Dec/2025:09:19:29.056 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 11 09:19:29 compute-0 radosgw[93354]: ====== starting new request req=0x7f36cd2845d0 =====
Dec 11 09:19:29 compute-0 radosgw[93354]: ====== req done req=0x7f36cd2845d0 op status=0 http_status=200 latency=0.001000031s ======
Dec 11 09:19:29 compute-0 radosgw[93354]: beast: 0x7f36cd2845d0: 192.168.122.102 - anonymous [11/Dec/2025:09:19:29.059 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Dec 11 09:19:29 compute-0 ceph-mgr[74715]: log_channel(cluster) log [DBG] : pgmap v65: 353 pgs: 353 active+clean; 457 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 op/s
Dec 11 09:19:29 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-2-0-compute-0-iryjby[94782]: 11/12/2025 09:19:29 : epoch 693a8bfe : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fef540047c0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 11 09:19:29 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-2-0-compute-0-iryjby[94782]: 11/12/2025 09:19:29 : epoch 693a8bfe : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fef60003540 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 11 09:19:30 compute-0 ceph-mon[74426]: pgmap v65: 353 pgs: 353 active+clean; 457 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 op/s
Dec 11 09:19:30 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-2-0-compute-0-iryjby[94782]: 11/12/2025 09:19:30 : epoch 693a8bfe : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fef50002b10 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 11 09:19:30 compute-0 sudo[103869]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vdydhaoteqqmsjrdmbaycgotnqytthhi ; /usr/bin/python3'
Dec 11 09:19:30 compute-0 sudo[103869]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 11 09:19:31 compute-0 radosgw[93354]: ====== starting new request req=0x7f36cd2845d0 =====
Dec 11 09:19:31 compute-0 radosgw[93354]: ====== req done req=0x7f36cd2845d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 11 09:19:31 compute-0 radosgw[93354]: beast: 0x7f36cd2845d0: 192.168.122.100 - anonymous [11/Dec/2025:09:19:31.058 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 11 09:19:31 compute-0 radosgw[93354]: ====== starting new request req=0x7f36cd2845d0 =====
Dec 11 09:19:31 compute-0 radosgw[93354]: ====== req done req=0x7f36cd2845d0 op status=0 http_status=200 latency=0.001000031s ======
Dec 11 09:19:31 compute-0 radosgw[93354]: beast: 0x7f36cd2845d0: 192.168.122.102 - anonymous [11/Dec/2025:09:19:31.062 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Dec 11 09:19:31 compute-0 ceph-mgr[74715]: log_channel(cluster) log [DBG] : pgmap v66: 353 pgs: 353 active+clean; 457 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 op/s
Dec 11 09:19:31 compute-0 python3[103871]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint radosgw-admin quay.io/ceph/ceph:v19 --fsid 31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch ps --daemon-type rgw --format json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 11 09:19:31 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-2-0-compute-0-iryjby[94782]: 11/12/2025 09:19:31 : epoch 693a8bfe : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fef580020f0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 11 09:19:31 compute-0 podman[103873]: 2025-12-11 09:19:31.199130909 +0000 UTC m=+0.051891114 container create 95a867115796007486b08edca98747ce780c79944a8598003c73bd9c02f296b4 (image=quay.io/ceph/ceph:v19, name=bold_einstein, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Dec 11 09:19:31 compute-0 systemd[1]: Started libpod-conmon-95a867115796007486b08edca98747ce780c79944a8598003c73bd9c02f296b4.scope.
Dec 11 09:19:31 compute-0 systemd[1]: Started libcrun container.
Dec 11 09:19:31 compute-0 podman[103873]: 2025-12-11 09:19:31.176534252 +0000 UTC m=+0.029294497 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Dec 11 09:19:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8d56eb92b644d7fc9346b4cecd048bf876371a022bb983f667ce4185c911d4c8/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 11 09:19:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8d56eb92b644d7fc9346b4cecd048bf876371a022bb983f667ce4185c911d4c8/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec 11 09:19:31 compute-0 podman[103873]: 2025-12-11 09:19:31.284724664 +0000 UTC m=+0.137484889 container init 95a867115796007486b08edca98747ce780c79944a8598003c73bd9c02f296b4 (image=quay.io/ceph/ceph:v19, name=bold_einstein, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 11 09:19:31 compute-0 podman[103873]: 2025-12-11 09:19:31.29224876 +0000 UTC m=+0.145008995 container start 95a867115796007486b08edca98747ce780c79944a8598003c73bd9c02f296b4 (image=quay.io/ceph/ceph:v19, name=bold_einstein, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 11 09:19:31 compute-0 ceph-mon[74426]: mon.compute-0@0(leader).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 11 09:19:31 compute-0 podman[103873]: 2025-12-11 09:19:31.295753539 +0000 UTC m=+0.148513774 container attach 95a867115796007486b08edca98747ce780c79944a8598003c73bd9c02f296b4 (image=quay.io/ceph/ceph:v19, name=bold_einstein, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, CEPH_REF=squid, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 11 09:19:31 compute-0 bold_einstein[103888]: ERROR: invalid flag --daemon-type
Dec 11 09:19:31 compute-0 systemd[1]: libpod-95a867115796007486b08edca98747ce780c79944a8598003c73bd9c02f296b4.scope: Deactivated successfully.
Dec 11 09:19:31 compute-0 podman[103873]: 2025-12-11 09:19:31.355873869 +0000 UTC m=+0.208634084 container died 95a867115796007486b08edca98747ce780c79944a8598003c73bd9c02f296b4 (image=quay.io/ceph/ceph:v19, name=bold_einstein, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec 11 09:19:31 compute-0 systemd[1]: var-lib-containers-storage-overlay-8d56eb92b644d7fc9346b4cecd048bf876371a022bb983f667ce4185c911d4c8-merged.mount: Deactivated successfully.
Dec 11 09:19:31 compute-0 podman[103873]: 2025-12-11 09:19:31.39912379 +0000 UTC m=+0.251884005 container remove 95a867115796007486b08edca98747ce780c79944a8598003c73bd9c02f296b4 (image=quay.io/ceph/ceph:v19, name=bold_einstein, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 11 09:19:31 compute-0 systemd[1]: libpod-conmon-95a867115796007486b08edca98747ce780c79944a8598003c73bd9c02f296b4.scope: Deactivated successfully.
Dec 11 09:19:31 compute-0 sudo[103869]: pam_unix(sudo:session): session closed for user root
Dec 11 09:19:31 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-2-0-compute-0-iryjby[94782]: 11/12/2025 09:19:31 : epoch 693a8bfe : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fef540047c0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 11 09:19:32 compute-0 ceph-mon[74426]: pgmap v66: 353 pgs: 353 active+clean; 457 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 op/s
Dec 11 09:19:32 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-2-0-compute-0-iryjby[94782]: 11/12/2025 09:19:32 : epoch 693a8bfe : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fef60003540 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 11 09:19:33 compute-0 radosgw[93354]: ====== starting new request req=0x7f36cd2845d0 =====
Dec 11 09:19:33 compute-0 radosgw[93354]: ====== req done req=0x7f36cd2845d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 11 09:19:33 compute-0 radosgw[93354]: beast: 0x7f36cd2845d0: 192.168.122.100 - anonymous [11/Dec/2025:09:19:33.061 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 11 09:19:33 compute-0 radosgw[93354]: ====== starting new request req=0x7f36cd2845d0 =====
Dec 11 09:19:33 compute-0 radosgw[93354]: ====== req done req=0x7f36cd2845d0 op status=0 http_status=200 latency=0.001000032s ======
Dec 11 09:19:33 compute-0 radosgw[93354]: beast: 0x7f36cd2845d0: 192.168.122.102 - anonymous [11/Dec/2025:09:19:33.065 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Dec 11 09:19:33 compute-0 ceph-mgr[74715]: log_channel(cluster) log [DBG] : pgmap v67: 353 pgs: 353 active+clean; 457 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 11 09:19:33 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-2-0-compute-0-iryjby[94782]: 11/12/2025 09:19:33 : epoch 693a8bfe : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fef50002b10 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 11 09:19:33 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-2-0-compute-0-iryjby[94782]: 11/12/2025 09:19:33 : epoch 693a8bfe : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fef580020f0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 11 09:19:33 compute-0 ceph-mon[74426]: pgmap v67: 353 pgs: 353 active+clean; 457 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 11 09:19:34 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-mgr-compute-0-wwpcae[74711]: ::ffff:192.168.122.100 - - [11/Dec/2025:09:19:34] "GET /metrics HTTP/1.1" 200 48223 "" "Prometheus/2.51.0"
Dec 11 09:19:34 compute-0 ceph-mgr[74715]: [prometheus INFO cherrypy.access.139680037156848] ::ffff:192.168.122.100 - - [11/Dec/2025:09:19:34] "GET /metrics HTTP/1.1" 200 48223 "" "Prometheus/2.51.0"
Dec 11 09:19:34 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-2-0-compute-0-iryjby[94782]: 11/12/2025 09:19:34 : epoch 693a8bfe : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fef540047c0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 11 09:19:35 compute-0 radosgw[93354]: ====== starting new request req=0x7f36cd2845d0 =====
Dec 11 09:19:35 compute-0 radosgw[93354]: ====== req done req=0x7f36cd2845d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 11 09:19:35 compute-0 radosgw[93354]: beast: 0x7f36cd2845d0: 192.168.122.100 - anonymous [11/Dec/2025:09:19:35.065 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 11 09:19:35 compute-0 radosgw[93354]: ====== starting new request req=0x7f36cd2845d0 =====
Dec 11 09:19:35 compute-0 radosgw[93354]: ====== req done req=0x7f36cd2845d0 op status=0 http_status=200 latency=0.001000032s ======
Dec 11 09:19:35 compute-0 radosgw[93354]: beast: 0x7f36cd2845d0: 192.168.122.102 - anonymous [11/Dec/2025:09:19:35.068 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Dec 11 09:19:35 compute-0 ceph-mgr[74715]: log_channel(cluster) log [DBG] : pgmap v68: 353 pgs: 353 active+clean; 457 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 op/s
Dec 11 09:19:35 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-2-0-compute-0-iryjby[94782]: 11/12/2025 09:19:35 : epoch 693a8bfe : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fef60003560 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 11 09:19:35 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-2-0-compute-0-iryjby[94782]: 11/12/2025 09:19:35 : epoch 693a8bfe : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fef50002b10 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 11 09:19:36 compute-0 ceph-mon[74426]: pgmap v68: 353 pgs: 353 active+clean; 457 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 op/s
Dec 11 09:19:36 compute-0 ceph-mon[74426]: mon.compute-0@0(leader).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 11 09:19:36 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-2-0-compute-0-iryjby[94782]: 11/12/2025 09:19:36 : epoch 693a8bfe : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fef58003630 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 11 09:19:36 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-haproxy-nfs-cephfs-compute-0-qtoxfz[95274]: [WARNING] 344/091936 (4) : Server backend/nfs.cephfs.1 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 1ms. 2 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Dec 11 09:19:37 compute-0 radosgw[93354]: ====== starting new request req=0x7f36cd2845d0 =====
Dec 11 09:19:37 compute-0 radosgw[93354]: ====== req done req=0x7f36cd2845d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 11 09:19:37 compute-0 radosgw[93354]: beast: 0x7f36cd2845d0: 192.168.122.100 - anonymous [11/Dec/2025:09:19:37.068 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 11 09:19:37 compute-0 radosgw[93354]: ====== starting new request req=0x7f36cd2845d0 =====
Dec 11 09:19:37 compute-0 radosgw[93354]: ====== req done req=0x7f36cd2845d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 11 09:19:37 compute-0 radosgw[93354]: beast: 0x7f36cd2845d0: 192.168.122.102 - anonymous [11/Dec/2025:09:19:37.072 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 11 09:19:37 compute-0 ceph-mgr[74715]: log_channel(cluster) log [DBG] : pgmap v69: 353 pgs: 353 active+clean; 457 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Dec 11 09:19:37 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-2-0-compute-0-iryjby[94782]: 11/12/2025 09:19:37 : epoch 693a8bfe : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fef540047c0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 11 09:19:37 compute-0 sudo[103925]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 11 09:19:37 compute-0 sudo[103925]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 11 09:19:37 compute-0 sudo[103925]: pam_unix(sudo:session): session closed for user root
Dec 11 09:19:37 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-2-0-compute-0-iryjby[94782]: 11/12/2025 09:19:37 : epoch 693a8bfe : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fef540047c0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 11 09:19:38 compute-0 ceph-mon[74426]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Dec 11 09:19:38 compute-0 ceph-mon[74426]: log_channel(audit) log [DBG] : from='mgr.14577 192.168.122.100:0/2102670525' entity='mgr.compute-0.wwpcae' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec 11 09:19:38 compute-0 ceph-mon[74426]: pgmap v69: 353 pgs: 353 active+clean; 457 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Dec 11 09:19:38 compute-0 ceph-mon[74426]: from='mgr.14577 192.168.122.100:0/2102670525' entity='mgr.compute-0.wwpcae' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec 11 09:19:38 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-2-0-compute-0-iryjby[94782]: 11/12/2025 09:19:38 : epoch 693a8bfe : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fef60003720 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 11 09:19:39 compute-0 radosgw[93354]: ====== starting new request req=0x7f36cd2845d0 =====
Dec 11 09:19:39 compute-0 radosgw[93354]: ====== req done req=0x7f36cd2845d0 op status=0 http_status=200 latency=0.001000031s ======
Dec 11 09:19:39 compute-0 radosgw[93354]: beast: 0x7f36cd2845d0: 192.168.122.100 - anonymous [11/Dec/2025:09:19:39.070 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Dec 11 09:19:39 compute-0 radosgw[93354]: ====== starting new request req=0x7f36cd2845d0 =====
Dec 11 09:19:39 compute-0 radosgw[93354]: ====== req done req=0x7f36cd2845d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 11 09:19:39 compute-0 radosgw[93354]: beast: 0x7f36cd2845d0: 192.168.122.102 - anonymous [11/Dec/2025:09:19:39.075 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 11 09:19:39 compute-0 ceph-mgr[74715]: log_channel(cluster) log [DBG] : pgmap v70: 353 pgs: 353 active+clean; 457 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 11 09:19:39 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-2-0-compute-0-iryjby[94782]: 11/12/2025 09:19:39 : epoch 693a8bfe : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fef58003630 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 11 09:19:39 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-2-0-compute-0-iryjby[94782]: 11/12/2025 09:19:39 : epoch 693a8bfe : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fef50003fe0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 11 09:19:39 compute-0 ceph-mon[74426]: pgmap v70: 353 pgs: 353 active+clean; 457 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 11 09:19:40 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-2-0-compute-0-iryjby[94782]: 11/12/2025 09:19:40 : epoch 693a8bfe : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fef540047c0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 11 09:19:41 compute-0 radosgw[93354]: ====== starting new request req=0x7f36cd2845d0 =====
Dec 11 09:19:41 compute-0 radosgw[93354]: ====== req done req=0x7f36cd2845d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 11 09:19:41 compute-0 radosgw[93354]: beast: 0x7f36cd2845d0: 192.168.122.100 - anonymous [11/Dec/2025:09:19:41.073 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 11 09:19:41 compute-0 radosgw[93354]: ====== starting new request req=0x7f36cd2845d0 =====
Dec 11 09:19:41 compute-0 radosgw[93354]: ====== req done req=0x7f36cd2845d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 11 09:19:41 compute-0 radosgw[93354]: beast: 0x7f36cd2845d0: 192.168.122.102 - anonymous [11/Dec/2025:09:19:41.078 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 11 09:19:41 compute-0 ceph-mgr[74715]: log_channel(cluster) log [DBG] : pgmap v71: 353 pgs: 353 active+clean; 457 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Dec 11 09:19:41 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-2-0-compute-0-iryjby[94782]: 11/12/2025 09:19:41 : epoch 693a8bfe : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fef540047c0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 11 09:19:41 compute-0 ceph-mon[74426]: mon.compute-0@0(leader).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 11 09:19:41 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-2-0-compute-0-iryjby[94782]: 11/12/2025 09:19:41 : epoch 693a8bfe : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fef58003630 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 11 09:19:41 compute-0 sudo[103977]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bkzvmfprdocwnmiaxctykodylaulbclo ; /usr/bin/python3'
Dec 11 09:19:41 compute-0 sudo[103977]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 11 09:19:41 compute-0 python3[103979]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint radosgw-admin quay.io/ceph/ceph:v19 --fsid 31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch ps --daemon-type rgw --format json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 11 09:19:41 compute-0 podman[103980]: 2025-12-11 09:19:41.769501713 +0000 UTC m=+0.060979787 container create 338d04ddfbb08980453fc748798ba58062db6b21ebeaf64a8679ba3e93ac4101 (image=quay.io/ceph/ceph:v19, name=funny_dhawan, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Dec 11 09:19:41 compute-0 systemd[1]: Started libpod-conmon-338d04ddfbb08980453fc748798ba58062db6b21ebeaf64a8679ba3e93ac4101.scope.
Dec 11 09:19:41 compute-0 systemd[1]: Started libcrun container.
Dec 11 09:19:41 compute-0 podman[103980]: 2025-12-11 09:19:41.746325518 +0000 UTC m=+0.037803612 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Dec 11 09:19:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cf847b055f8274d5ffa7b07271004ec41062b262d282eaea8ab73d0594116a00/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec 11 09:19:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cf847b055f8274d5ffa7b07271004ec41062b262d282eaea8ab73d0594116a00/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 11 09:19:41 compute-0 podman[103980]: 2025-12-11 09:19:41.861581951 +0000 UTC m=+0.153060045 container init 338d04ddfbb08980453fc748798ba58062db6b21ebeaf64a8679ba3e93ac4101 (image=quay.io/ceph/ceph:v19, name=funny_dhawan, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Dec 11 09:19:41 compute-0 podman[103980]: 2025-12-11 09:19:41.869113917 +0000 UTC m=+0.160591991 container start 338d04ddfbb08980453fc748798ba58062db6b21ebeaf64a8679ba3e93ac4101 (image=quay.io/ceph/ceph:v19, name=funny_dhawan, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, ceph=True)
Dec 11 09:19:41 compute-0 podman[103980]: 2025-12-11 09:19:41.873249426 +0000 UTC m=+0.164727520 container attach 338d04ddfbb08980453fc748798ba58062db6b21ebeaf64a8679ba3e93ac4101 (image=quay.io/ceph/ceph:v19, name=funny_dhawan, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.40.1)
Dec 11 09:19:41 compute-0 funny_dhawan[103995]: ERROR: invalid flag --daemon-type
Dec 11 09:19:41 compute-0 systemd[1]: libpod-338d04ddfbb08980453fc748798ba58062db6b21ebeaf64a8679ba3e93ac4101.scope: Deactivated successfully.
Dec 11 09:19:41 compute-0 podman[103980]: 2025-12-11 09:19:41.92935619 +0000 UTC m=+0.220834264 container died 338d04ddfbb08980453fc748798ba58062db6b21ebeaf64a8679ba3e93ac4101 (image=quay.io/ceph/ceph:v19, name=funny_dhawan, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=squid, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec 11 09:19:41 compute-0 systemd[1]: var-lib-containers-storage-overlay-cf847b055f8274d5ffa7b07271004ec41062b262d282eaea8ab73d0594116a00-merged.mount: Deactivated successfully.
Dec 11 09:19:41 compute-0 podman[103980]: 2025-12-11 09:19:41.976341388 +0000 UTC m=+0.267819462 container remove 338d04ddfbb08980453fc748798ba58062db6b21ebeaf64a8679ba3e93ac4101 (image=quay.io/ceph/ceph:v19, name=funny_dhawan, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS)
Dec 11 09:19:41 compute-0 systemd[1]: libpod-conmon-338d04ddfbb08980453fc748798ba58062db6b21ebeaf64a8679ba3e93ac4101.scope: Deactivated successfully.
Dec 11 09:19:41 compute-0 sudo[103977]: pam_unix(sudo:session): session closed for user root
Dec 11 09:19:42 compute-0 ceph-mon[74426]: pgmap v71: 353 pgs: 353 active+clean; 457 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Dec 11 09:19:42 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-2-0-compute-0-iryjby[94782]: 11/12/2025 09:19:42 : epoch 693a8bfe : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fef50003fe0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 11 09:19:43 compute-0 radosgw[93354]: ====== starting new request req=0x7f36cd2845d0 =====
Dec 11 09:19:43 compute-0 radosgw[93354]: ====== req done req=0x7f36cd2845d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 11 09:19:43 compute-0 radosgw[93354]: beast: 0x7f36cd2845d0: 192.168.122.100 - anonymous [11/Dec/2025:09:19:43.075 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 11 09:19:43 compute-0 radosgw[93354]: ====== starting new request req=0x7f36cd2845d0 =====
Dec 11 09:19:43 compute-0 radosgw[93354]: ====== req done req=0x7f36cd2845d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 11 09:19:43 compute-0 radosgw[93354]: beast: 0x7f36cd2845d0: 192.168.122.102 - anonymous [11/Dec/2025:09:19:43.080 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 11 09:19:43 compute-0 ceph-mgr[74715]: log_channel(cluster) log [DBG] : pgmap v72: 353 pgs: 353 active+clean; 457 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Dec 11 09:19:43 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-2-0-compute-0-iryjby[94782]: 11/12/2025 09:19:43 : epoch 693a8bfe : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fef60003760 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 11 09:19:43 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-2-0-compute-0-iryjby[94782]: 11/12/2025 09:19:43 : epoch 693a8bfe : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fef540047c0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 11 09:19:43 compute-0 ceph-mon[74426]: pgmap v72: 353 pgs: 353 active+clean; 457 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Dec 11 09:19:44 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-mgr-compute-0-wwpcae[74711]: ::ffff:192.168.122.100 - - [11/Dec/2025:09:19:44] "GET /metrics HTTP/1.1" 200 48225 "" "Prometheus/2.51.0"
Dec 11 09:19:44 compute-0 ceph-mgr[74715]: [prometheus INFO cherrypy.access.139680037156848] ::ffff:192.168.122.100 - - [11/Dec/2025:09:19:44] "GET /metrics HTTP/1.1" 200 48225 "" "Prometheus/2.51.0"
Dec 11 09:19:44 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-2-0-compute-0-iryjby[94782]: 11/12/2025 09:19:44 : epoch 693a8bfe : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fef58003630 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 11 09:19:45 compute-0 radosgw[93354]: ====== starting new request req=0x7f36cd2845d0 =====
Dec 11 09:19:45 compute-0 radosgw[93354]: ====== req done req=0x7f36cd2845d0 op status=0 http_status=200 latency=0.001000031s ======
Dec 11 09:19:45 compute-0 radosgw[93354]: beast: 0x7f36cd2845d0: 192.168.122.100 - anonymous [11/Dec/2025:09:19:45.078 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Dec 11 09:19:45 compute-0 radosgw[93354]: ====== starting new request req=0x7f36cd2845d0 =====
Dec 11 09:19:45 compute-0 radosgw[93354]: ====== req done req=0x7f36cd2845d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 11 09:19:45 compute-0 radosgw[93354]: beast: 0x7f36cd2845d0: 192.168.122.102 - anonymous [11/Dec/2025:09:19:45.084 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 11 09:19:45 compute-0 ceph-mgr[74715]: log_channel(cluster) log [DBG] : pgmap v73: 353 pgs: 353 active+clean; 457 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 682 B/s rd, 85 B/s wr, 0 op/s
Dec 11 09:19:45 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-2-0-compute-0-iryjby[94782]: 11/12/2025 09:19:45 : epoch 693a8bfe : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fef50003fe0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 11 09:19:45 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-2-0-compute-0-iryjby[94782]: 11/12/2025 09:19:45 : epoch 693a8bfe : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fef60003780 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 11 09:19:46 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-2-0-compute-0-iryjby[94782]: 11/12/2025 09:19:46 : epoch 693a8bfe : compute-0 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Dec 11 09:19:46 compute-0 ceph-mon[74426]: mon.compute-0@0(leader).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 11 09:19:46 compute-0 ceph-mon[74426]: pgmap v73: 353 pgs: 353 active+clean; 457 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 682 B/s rd, 85 B/s wr, 0 op/s
Dec 11 09:19:46 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-2-0-compute-0-iryjby[94782]: 11/12/2025 09:19:46 : epoch 693a8bfe : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fef540047c0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 11 09:19:47 compute-0 radosgw[93354]: ====== starting new request req=0x7f36cd2845d0 =====
Dec 11 09:19:47 compute-0 radosgw[93354]: ====== req done req=0x7f36cd2845d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 11 09:19:47 compute-0 radosgw[93354]: beast: 0x7f36cd2845d0: 192.168.122.100 - anonymous [11/Dec/2025:09:19:47.082 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 11 09:19:47 compute-0 radosgw[93354]: ====== starting new request req=0x7f36cd2845d0 =====
Dec 11 09:19:47 compute-0 radosgw[93354]: ====== req done req=0x7f36cd2845d0 op status=0 http_status=200 latency=0.001000031s ======
Dec 11 09:19:47 compute-0 radosgw[93354]: beast: 0x7f36cd2845d0: 192.168.122.102 - anonymous [11/Dec/2025:09:19:47.089 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Dec 11 09:19:47 compute-0 ceph-mgr[74715]: log_channel(cluster) log [DBG] : pgmap v74: 353 pgs: 353 active+clean; 457 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 597 B/s rd, 85 B/s wr, 0 op/s
Dec 11 09:19:47 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-2-0-compute-0-iryjby[94782]: 11/12/2025 09:19:47 : epoch 693a8bfe : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fef58003630 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 11 09:19:47 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-2-0-compute-0-iryjby[94782]: 11/12/2025 09:19:47 : epoch 693a8bfe : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fef50003fe0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 11 09:19:48 compute-0 ceph-mon[74426]: pgmap v74: 353 pgs: 353 active+clean; 457 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 597 B/s rd, 85 B/s wr, 0 op/s
Dec 11 09:19:48 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-2-0-compute-0-iryjby[94782]: 11/12/2025 09:19:48 : epoch 693a8bfe : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fef600037a0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 11 09:19:49 compute-0 radosgw[93354]: ====== starting new request req=0x7f36cd2845d0 =====
Dec 11 09:19:49 compute-0 radosgw[93354]: ====== req done req=0x7f36cd2845d0 op status=0 http_status=200 latency=0.001000032s ======
Dec 11 09:19:49 compute-0 radosgw[93354]: beast: 0x7f36cd2845d0: 192.168.122.100 - anonymous [11/Dec/2025:09:19:49.084 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Dec 11 09:19:49 compute-0 radosgw[93354]: ====== starting new request req=0x7f36cd2845d0 =====
Dec 11 09:19:49 compute-0 radosgw[93354]: ====== req done req=0x7f36cd2845d0 op status=0 http_status=200 latency=0.001000032s ======
Dec 11 09:19:49 compute-0 radosgw[93354]: beast: 0x7f36cd2845d0: 192.168.122.102 - anonymous [11/Dec/2025:09:19:49.093 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Dec 11 09:19:49 compute-0 ceph-mgr[74715]: log_channel(cluster) log [DBG] : pgmap v75: 353 pgs: 353 active+clean; 457 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 426 B/s wr, 1 op/s
Dec 11 09:19:49 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-2-0-compute-0-iryjby[94782]: 11/12/2025 09:19:49 : epoch 693a8bfe : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fef540047c0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 11 09:19:49 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-2-0-compute-0-iryjby[94782]: 11/12/2025 09:19:49 : epoch 693a8bfe : compute-0 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Dec 11 09:19:49 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-2-0-compute-0-iryjby[94782]: 11/12/2025 09:19:49 : epoch 693a8bfe : compute-0 : ganesha.nfsd-2[reaper] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Dec 11 09:19:49 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-2-0-compute-0-iryjby[94782]: 11/12/2025 09:19:49 : epoch 693a8bfe : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fef58003630 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 11 09:19:50 compute-0 ceph-mon[74426]: pgmap v75: 353 pgs: 353 active+clean; 457 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 426 B/s wr, 1 op/s
Dec 11 09:19:50 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-2-0-compute-0-iryjby[94782]: 11/12/2025 09:19:50 : epoch 693a8bfe : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fef50003fe0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 11 09:19:51 compute-0 radosgw[93354]: ====== starting new request req=0x7f36cd2845d0 =====
Dec 11 09:19:51 compute-0 radosgw[93354]: ====== req done req=0x7f36cd2845d0 op status=0 http_status=200 latency=0.001000031s ======
Dec 11 09:19:51 compute-0 radosgw[93354]: beast: 0x7f36cd2845d0: 192.168.122.100 - anonymous [11/Dec/2025:09:19:51.086 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Dec 11 09:19:51 compute-0 radosgw[93354]: ====== starting new request req=0x7f36cd2845d0 =====
Dec 11 09:19:51 compute-0 radosgw[93354]: ====== req done req=0x7f36cd2845d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 11 09:19:51 compute-0 radosgw[93354]: beast: 0x7f36cd2845d0: 192.168.122.102 - anonymous [11/Dec/2025:09:19:51.097 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 11 09:19:51 compute-0 ceph-mgr[74715]: log_channel(cluster) log [DBG] : pgmap v76: 353 pgs: 353 active+clean; 457 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 597 B/s wr, 1 op/s
Dec 11 09:19:51 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-2-0-compute-0-iryjby[94782]: 11/12/2025 09:19:51 : epoch 693a8bfe : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fef600037c0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 11 09:19:51 compute-0 ceph-mon[74426]: mon.compute-0@0(leader).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 11 09:19:51 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-2-0-compute-0-iryjby[94782]: 11/12/2025 09:19:51 : epoch 693a8bfe : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fef540047c0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 11 09:19:51 compute-0 ceph-mon[74426]: pgmap v76: 353 pgs: 353 active+clean; 457 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 597 B/s wr, 1 op/s
Dec 11 09:19:52 compute-0 sudo[104062]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ggsbfnponfysblewihkrvjvcausfdurb ; /usr/bin/python3'
Dec 11 09:19:52 compute-0 sudo[104062]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 11 09:19:52 compute-0 python3[104064]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint radosgw-admin quay.io/ceph/ceph:v19 --fsid 31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch ps --daemon-type rgw --format json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 11 09:19:52 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-2-0-compute-0-iryjby[94782]: 11/12/2025 09:19:52 : epoch 693a8bfe : compute-0 : ganesha.nfsd-2[reaper] nfs_lift_grace_locked :STATE :EVENT :NFS Server Now NOT IN GRACE
Dec 11 09:19:52 compute-0 podman[104065]: 2025-12-11 09:19:52.347791295 +0000 UTC m=+0.070953029 container create af5df82a5a4a9b17716ab08500e021507e1990d8903b753e7f8edb9e018471b0 (image=quay.io/ceph/ceph:v19, name=sad_wescoff, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Dec 11 09:19:52 compute-0 systemd[1]: Started libpod-conmon-af5df82a5a4a9b17716ab08500e021507e1990d8903b753e7f8edb9e018471b0.scope.
Dec 11 09:19:52 compute-0 podman[104065]: 2025-12-11 09:19:52.308054932 +0000 UTC m=+0.031216686 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Dec 11 09:19:52 compute-0 systemd[1]: Started libcrun container.
Dec 11 09:19:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2b1f87c9ceff280c15bf3468b0d01d431faa56386c90d1219bf1abb6b89c6609/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec 11 09:19:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2b1f87c9ceff280c15bf3468b0d01d431faa56386c90d1219bf1abb6b89c6609/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 11 09:19:52 compute-0 podman[104065]: 2025-12-11 09:19:52.474580778 +0000 UTC m=+0.197742502 container init af5df82a5a4a9b17716ab08500e021507e1990d8903b753e7f8edb9e018471b0 (image=quay.io/ceph/ceph:v19, name=sad_wescoff, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Dec 11 09:19:52 compute-0 podman[104065]: 2025-12-11 09:19:52.481864376 +0000 UTC m=+0.205026110 container start af5df82a5a4a9b17716ab08500e021507e1990d8903b753e7f8edb9e018471b0 (image=quay.io/ceph/ceph:v19, name=sad_wescoff, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 11 09:19:52 compute-0 podman[104065]: 2025-12-11 09:19:52.511570244 +0000 UTC m=+0.234731978 container attach af5df82a5a4a9b17716ab08500e021507e1990d8903b753e7f8edb9e018471b0 (image=quay.io/ceph/ceph:v19, name=sad_wescoff, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True)
Dec 11 09:19:52 compute-0 sad_wescoff[104081]: ERROR: invalid flag --daemon-type
Dec 11 09:19:52 compute-0 systemd[1]: libpod-af5df82a5a4a9b17716ab08500e021507e1990d8903b753e7f8edb9e018471b0.scope: Deactivated successfully.
Dec 11 09:19:52 compute-0 podman[104065]: 2025-12-11 09:19:52.539525488 +0000 UTC m=+0.262687222 container died af5df82a5a4a9b17716ab08500e021507e1990d8903b753e7f8edb9e018471b0 (image=quay.io/ceph/ceph:v19, name=sad_wescoff, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 11 09:19:52 compute-0 systemd[1]: var-lib-containers-storage-overlay-2b1f87c9ceff280c15bf3468b0d01d431faa56386c90d1219bf1abb6b89c6609-merged.mount: Deactivated successfully.
Dec 11 09:19:52 compute-0 podman[104065]: 2025-12-11 09:19:52.581421038 +0000 UTC m=+0.304582772 container remove af5df82a5a4a9b17716ab08500e021507e1990d8903b753e7f8edb9e018471b0 (image=quay.io/ceph/ceph:v19, name=sad_wescoff, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec 11 09:19:52 compute-0 systemd[1]: libpod-conmon-af5df82a5a4a9b17716ab08500e021507e1990d8903b753e7f8edb9e018471b0.scope: Deactivated successfully.
Dec 11 09:19:52 compute-0 sudo[104062]: pam_unix(sudo:session): session closed for user root
Dec 11 09:19:52 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-2-0-compute-0-iryjby[94782]: 11/12/2025 09:19:52 : epoch 693a8bfe : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fef58003630 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 11 09:19:53 compute-0 radosgw[93354]: ====== starting new request req=0x7f36cd2845d0 =====
Dec 11 09:19:53 compute-0 radosgw[93354]: ====== req done req=0x7f36cd2845d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 11 09:19:53 compute-0 radosgw[93354]: beast: 0x7f36cd2845d0: 192.168.122.100 - anonymous [11/Dec/2025:09:19:53.089 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 11 09:19:53 compute-0 radosgw[93354]: ====== starting new request req=0x7f36cd2845d0 =====
Dec 11 09:19:53 compute-0 radosgw[93354]: ====== req done req=0x7f36cd2845d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 11 09:19:53 compute-0 radosgw[93354]: beast: 0x7f36cd2845d0: 192.168.122.102 - anonymous [11/Dec/2025:09:19:53.101 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 11 09:19:53 compute-0 ceph-mgr[74715]: log_channel(cluster) log [DBG] : pgmap v77: 353 pgs: 353 active+clean; 457 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 597 B/s wr, 1 op/s
Dec 11 09:19:53 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-2-0-compute-0-iryjby[94782]: 11/12/2025 09:19:53 : epoch 693a8bfe : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fef50003fe0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 11 09:19:53 compute-0 ceph-mon[74426]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Dec 11 09:19:53 compute-0 ceph-mon[74426]: log_channel(audit) log [DBG] : from='mgr.14577 192.168.122.100:0/2102670525' entity='mgr.compute-0.wwpcae' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec 11 09:19:53 compute-0 ceph-mgr[74715]: [volumes INFO mgr_util] scanning for idle connections..
Dec 11 09:19:53 compute-0 ceph-mgr[74715]: [volumes INFO mgr_util] cleaning up connections: []
Dec 11 09:19:53 compute-0 ceph-mgr[74715]: [volumes INFO mgr_util] scanning for idle connections..
Dec 11 09:19:53 compute-0 ceph-mgr[74715]: [volumes INFO mgr_util] cleaning up connections: [('cephfs', <mgr_util.CephfsConnectionPool.Connection object at 0x7f09cc0640a0>)]
Dec 11 09:19:53 compute-0 ceph-mgr[74715]: [volumes INFO mgr_util] disconnecting from cephfs 'cephfs'
Dec 11 09:19:53 compute-0 ceph-mgr[74715]: [volumes INFO mgr_util] scanning for idle connections..
Dec 11 09:19:53 compute-0 ceph-mgr[74715]: [volumes INFO mgr_util] cleaning up connections: [('cephfs', <mgr_util.CephfsConnectionPool.Connection object at 0x7f09cc064340>)]
Dec 11 09:19:53 compute-0 ceph-mgr[74715]: [volumes INFO mgr_util] disconnecting from cephfs 'cephfs'
Dec 11 09:19:53 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-2-0-compute-0-iryjby[94782]: 11/12/2025 09:19:53 : epoch 693a8bfe : compute-0 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fef50003fe0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 11 09:19:54 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-mgr-compute-0-wwpcae[74711]: ::ffff:192.168.122.100 - - [11/Dec/2025:09:19:54] "GET /metrics HTTP/1.1" 200 48223 "" "Prometheus/2.51.0"
Dec 11 09:19:54 compute-0 ceph-mgr[74715]: [prometheus INFO cherrypy.access.139680037156848] ::ffff:192.168.122.100 - - [11/Dec/2025:09:19:54] "GET /metrics HTTP/1.1" 200 48223 "" "Prometheus/2.51.0"
Dec 11 09:19:54 compute-0 ceph-mon[74426]: pgmap v77: 353 pgs: 353 active+clean; 457 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 597 B/s wr, 1 op/s
Dec 11 09:19:54 compute-0 ceph-mon[74426]: from='mgr.14577 192.168.122.100:0/2102670525' entity='mgr.compute-0.wwpcae' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec 11 09:19:54 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-2-0-compute-0-iryjby[94782]: 11/12/2025 09:19:54 : epoch 693a8bfe : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fef50003fe0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 11 09:19:55 compute-0 radosgw[93354]: ====== starting new request req=0x7f36cd2845d0 =====
Dec 11 09:19:55 compute-0 radosgw[93354]: ====== req done req=0x7f36cd2845d0 op status=0 http_status=200 latency=0.001000031s ======
Dec 11 09:19:55 compute-0 radosgw[93354]: beast: 0x7f36cd2845d0: 192.168.122.100 - anonymous [11/Dec/2025:09:19:55.091 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Dec 11 09:19:55 compute-0 radosgw[93354]: ====== starting new request req=0x7f36cd2845d0 =====
Dec 11 09:19:55 compute-0 radosgw[93354]: ====== req done req=0x7f36cd2845d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 11 09:19:55 compute-0 radosgw[93354]: beast: 0x7f36cd2845d0: 192.168.122.102 - anonymous [11/Dec/2025:09:19:55.105 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 11 09:19:55 compute-0 ceph-mgr[74715]: log_channel(cluster) log [DBG] : pgmap v78: 353 pgs: 353 active+clean; 457 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.3 KiB/s rd, 1.2 KiB/s wr, 3 op/s
Dec 11 09:19:55 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-2-0-compute-0-iryjby[94782]: 11/12/2025 09:19:55 : epoch 693a8bfe : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fef58003630 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 11 09:19:55 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-2-0-compute-0-iryjby[94782]: 11/12/2025 09:19:55 : epoch 693a8bfe : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fef4c001070 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 11 09:19:55 compute-0 ceph-mon[74426]: pgmap v78: 353 pgs: 353 active+clean; 457 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.3 KiB/s rd, 1.2 KiB/s wr, 3 op/s
Dec 11 09:19:55 compute-0 ceph-mon[74426]: log_channel(cluster) log [DBG] : mgrmap e36: compute-0.wwpcae(active, since 92s), standbys: compute-2.uiimcn, compute-1.unesvp
Dec 11 09:19:56 compute-0 ceph-mon[74426]: mon.compute-0@0(leader).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 11 09:19:56 compute-0 ceph-mon[74426]: mgrmap e36: compute-0.wwpcae(active, since 92s), standbys: compute-2.uiimcn, compute-1.unesvp
Dec 11 09:19:56 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-2-0-compute-0-iryjby[94782]: 11/12/2025 09:19:56 : epoch 693a8bfe : compute-0 : ganesha.nfsd-2[svc_22] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fef700032b0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 11 09:19:57 compute-0 radosgw[93354]: ====== starting new request req=0x7f36cd2845d0 =====
Dec 11 09:19:57 compute-0 radosgw[93354]: ====== req done req=0x7f36cd2845d0 op status=0 http_status=200 latency=0.001000031s ======
Dec 11 09:19:57 compute-0 radosgw[93354]: beast: 0x7f36cd2845d0: 192.168.122.100 - anonymous [11/Dec/2025:09:19:57.093 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Dec 11 09:19:57 compute-0 radosgw[93354]: ====== starting new request req=0x7f36cd2845d0 =====
Dec 11 09:19:57 compute-0 radosgw[93354]: ====== req done req=0x7f36cd2845d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 11 09:19:57 compute-0 radosgw[93354]: beast: 0x7f36cd2845d0: 192.168.122.102 - anonymous [11/Dec/2025:09:19:57.109 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 11 09:19:57 compute-0 ceph-mgr[74715]: log_channel(cluster) log [DBG] : pgmap v79: 353 pgs: 353 active+clean; 457 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.8 KiB/s rd, 1.2 KiB/s wr, 3 op/s
Dec 11 09:19:57 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-2-0-compute-0-iryjby[94782]: 11/12/2025 09:19:57 : epoch 693a8bfe : compute-0 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fef50003fe0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 11 09:19:57 compute-0 sudo[104120]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 11 09:19:57 compute-0 sudo[104120]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 11 09:19:57 compute-0 sudo[104120]: pam_unix(sudo:session): session closed for user root
Dec 11 09:19:57 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-2-0-compute-0-iryjby[94782]: 11/12/2025 09:19:57 : epoch 693a8bfe : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fef58003630 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 11 09:19:58 compute-0 ceph-mon[74426]: pgmap v79: 353 pgs: 353 active+clean; 457 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.8 KiB/s rd, 1.2 KiB/s wr, 3 op/s
Dec 11 09:19:58 compute-0 sudo[104145]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 11 09:19:58 compute-0 sudo[104145]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 11 09:19:58 compute-0 sudo[104145]: pam_unix(sudo:session): session closed for user root
Dec 11 09:19:58 compute-0 sudo[104170]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ls
Dec 11 09:19:58 compute-0 sudo[104170]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 11 09:19:58 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-2-0-compute-0-iryjby[94782]: 11/12/2025 09:19:58 : epoch 693a8bfe : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fef4c001070 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 11 09:19:58 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-haproxy-nfs-cephfs-compute-0-qtoxfz[95274]: [WARNING] 344/091958 (4) : Server backend/nfs.cephfs.1 is UP, reason: Layer4 check passed, check duration: 0ms. 3 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Dec 11 09:19:58 compute-0 podman[104268]: 2025-12-11 09:19:58.992647524 +0000 UTC m=+0.054032511 container exec 9c08b5b1828392d2cc014af7ec9c415c9c915d4b6dd9798da9403e6929851bd4 (image=quay.io/ceph/ceph:v19, name=ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-mon-compute-0, CEPH_REF=squid, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 11 09:19:59 compute-0 radosgw[93354]: ====== starting new request req=0x7f36cd2845d0 =====
Dec 11 09:19:59 compute-0 radosgw[93354]: ====== req done req=0x7f36cd2845d0 op status=0 http_status=200 latency=0.001000032s ======
Dec 11 09:19:59 compute-0 radosgw[93354]: beast: 0x7f36cd2845d0: 192.168.122.100 - anonymous [11/Dec/2025:09:19:59.094 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Dec 11 09:19:59 compute-0 podman[104268]: 2025-12-11 09:19:59.107701591 +0000 UTC m=+0.169086548 container exec_died 9c08b5b1828392d2cc014af7ec9c415c9c915d4b6dd9798da9403e6929851bd4 (image=quay.io/ceph/ceph:v19, name=ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-mon-compute-0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 11 09:19:59 compute-0 radosgw[93354]: ====== starting new request req=0x7f36cd2845d0 =====
Dec 11 09:19:59 compute-0 radosgw[93354]: ====== req done req=0x7f36cd2845d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 11 09:19:59 compute-0 radosgw[93354]: beast: 0x7f36cd2845d0: 192.168.122.102 - anonymous [11/Dec/2025:09:19:59.112 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 11 09:19:59 compute-0 ceph-mgr[74715]: log_channel(cluster) log [DBG] : pgmap v80: 353 pgs: 353 active+clean; 457 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.9 KiB/s rd, 1.2 KiB/s wr, 3 op/s
Dec 11 09:19:59 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-2-0-compute-0-iryjby[94782]: 11/12/2025 09:19:59 : epoch 693a8bfe : compute-0 : ganesha.nfsd-2[svc_22] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fef700032b0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 11 09:19:59 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-2-0-compute-0-iryjby[94782]: 11/12/2025 09:19:59 : epoch 693a8bfe : compute-0 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fef50003fe0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 11 09:19:59 compute-0 podman[104387]: 2025-12-11 09:19:59.597878934 +0000 UTC m=+0.060570905 container exec 71c011c91ff6298c4d680d36893ef7d2b662adac0c087e7fdac0eacd23df3a9f (image=quay.io/prometheus/node-exporter:v1.7.0, name=ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-node-exporter-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec 11 09:19:59 compute-0 podman[104387]: 2025-12-11 09:19:59.611586052 +0000 UTC m=+0.074278003 container exec_died 71c011c91ff6298c4d680d36893ef7d2b662adac0c087e7fdac0eacd23df3a9f (image=quay.io/prometheus/node-exporter:v1.7.0, name=ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-node-exporter-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec 11 09:19:59 compute-0 podman[104476]: 2025-12-11 09:19:59.969284374 +0000 UTC m=+0.056232949 container exec b054262215ee01c3adca2deaedf76efed99f7a0a45370b1e29ea68ac174c96fc (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-2-0-compute-0-iryjby, ceph=True, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325)
Dec 11 09:19:59 compute-0 podman[104476]: 2025-12-11 09:19:59.979135212 +0000 UTC m=+0.066083787 container exec_died b054262215ee01c3adca2deaedf76efed99f7a0a45370b1e29ea68ac174c96fc (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-2-0-compute-0-iryjby, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250325, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 11 09:20:00 compute-0 ceph-mon[74426]: log_channel(cluster) log [INF] : overall HEALTH_OK
Dec 11 09:20:00 compute-0 ceph-mon[74426]: pgmap v80: 353 pgs: 353 active+clean; 457 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.9 KiB/s rd, 1.2 KiB/s wr, 3 op/s
Dec 11 09:20:00 compute-0 ceph-mon[74426]: overall HEALTH_OK
Dec 11 09:20:00 compute-0 podman[104540]: 2025-12-11 09:20:00.201643498 +0000 UTC m=+0.056683204 container exec 67bfd387bd504b410155709fb8f34d841ddb874b3571550320b1a27fbc3dea08 (image=quay.io/ceph/haproxy:2.3, name=ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-haproxy-nfs-cephfs-compute-0-qtoxfz)
Dec 11 09:20:00 compute-0 podman[104540]: 2025-12-11 09:20:00.213799068 +0000 UTC m=+0.068838754 container exec_died 67bfd387bd504b410155709fb8f34d841ddb874b3571550320b1a27fbc3dea08 (image=quay.io/ceph/haproxy:2.3, name=ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-haproxy-nfs-cephfs-compute-0-qtoxfz)
Dec 11 09:20:00 compute-0 podman[104603]: 2025-12-11 09:20:00.429057337 +0000 UTC m=+0.055395273 container exec 2552e27573042b86043f6eb7d85be1b045b99c64a99b48fb64027534fb48d8b4 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-keepalived-nfs-cephfs-compute-0-ewssxv, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.28.2, io.openshift.expose-services=, architecture=x86_64, build-date=2023-02-22T09:23:20, com.redhat.component=keepalived-container, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, summary=Provides keepalived on RHEL 9 for Ceph., vcs-type=git, release=1793, vendor=Red Hat, Inc., description=keepalived for Ceph, distribution-scope=public, io.openshift.tags=Ceph keepalived, version=2.2.4, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Guillaume Abrioux <gabrioux@redhat.com>, name=keepalived, io.k8s.display-name=Keepalived on RHEL 9)
Dec 11 09:20:00 compute-0 podman[104603]: 2025-12-11 09:20:00.446711349 +0000 UTC m=+0.073049235 container exec_died 2552e27573042b86043f6eb7d85be1b045b99c64a99b48fb64027534fb48d8b4 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-keepalived-nfs-cephfs-compute-0-ewssxv, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, io.buildah.version=1.28.2, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Keepalived on RHEL 9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, build-date=2023-02-22T09:23:20, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, name=keepalived, summary=Provides keepalived on RHEL 9 for Ceph., io.openshift.expose-services=, vcs-type=git, io.openshift.tags=Ceph keepalived, version=2.2.4, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, release=1793, vendor=Red Hat, Inc., description=keepalived for Ceph, distribution-scope=public, com.redhat.component=keepalived-container)
Dec 11 09:20:00 compute-0 podman[104669]: 2025-12-11 09:20:00.655611669 +0000 UTC m=+0.056825568 container exec 3b9e0073af5e4c7c2b471c925a825b813ba9ef3bc5645cf6b5c090ee432c8a5d (image=quay.io/prometheus/alertmanager:v0.25.0, name=ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-alertmanager-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec 11 09:20:00 compute-0 podman[104669]: 2025-12-11 09:20:00.692448051 +0000 UTC m=+0.093661920 container exec_died 3b9e0073af5e4c7c2b471c925a825b813ba9ef3bc5645cf6b5c090ee432c8a5d (image=quay.io/prometheus/alertmanager:v0.25.0, name=ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-alertmanager-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec 11 09:20:00 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-2-0-compute-0-iryjby[94782]: 11/12/2025 09:20:00 : epoch 693a8bfe : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fef58003630 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 11 09:20:00 compute-0 podman[104745]: 2025-12-11 09:20:00.926310951 +0000 UTC m=+0.062731152 container exec ac5286c4a64ed2433117d537e938dca0aaa6db8529e224a2e9020f17a5de7e45 (image=quay.io/ceph/grafana:10.4.0, name=ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0, maintainer=Grafana Labs <hello@grafana.com>)
Dec 11 09:20:01 compute-0 radosgw[93354]: ====== starting new request req=0x7f36cd2845d0 =====
Dec 11 09:20:01 compute-0 radosgw[93354]: ====== req done req=0x7f36cd2845d0 op status=0 http_status=200 latency=0.001000032s ======
Dec 11 09:20:01 compute-0 radosgw[93354]: beast: 0x7f36cd2845d0: 192.168.122.100 - anonymous [11/Dec/2025:09:20:01.096 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Dec 11 09:20:01 compute-0 radosgw[93354]: ====== starting new request req=0x7f36cd2845d0 =====
Dec 11 09:20:01 compute-0 radosgw[93354]: ====== req done req=0x7f36cd2845d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 11 09:20:01 compute-0 radosgw[93354]: beast: 0x7f36cd2845d0: 192.168.122.102 - anonymous [11/Dec/2025:09:20:01.115 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 11 09:20:01 compute-0 ceph-mgr[74715]: log_channel(cluster) log [DBG] : pgmap v81: 353 pgs: 353 active+clean; 457 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 852 B/s wr, 2 op/s
Dec 11 09:20:01 compute-0 podman[104745]: 2025-12-11 09:20:01.151999717 +0000 UTC m=+0.288419918 container exec_died ac5286c4a64ed2433117d537e938dca0aaa6db8529e224a2e9020f17a5de7e45 (image=quay.io/ceph/grafana:10.4.0, name=ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0, maintainer=Grafana Labs <hello@grafana.com>)
Dec 11 09:20:01 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-2-0-compute-0-iryjby[94782]: 11/12/2025 09:20:01 : epoch 693a8bfe : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fef4c001070 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 11 09:20:01 compute-0 ceph-mon[74426]: mon.compute-0@0(leader).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 11 09:20:01 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-2-0-compute-0-iryjby[94782]: 11/12/2025 09:20:01 : epoch 693a8bfe : compute-0 : ganesha.nfsd-2[svc_22] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fef70001ac0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 11 09:20:01 compute-0 podman[104859]: 2025-12-11 09:20:01.523216841 +0000 UTC m=+0.056294721 container exec 90c28ad7bc328840534f9285d34005394d82c163694e46ece6c6c1f2d03e7fe2 (image=quay.io/prometheus/prometheus:v2.51.0, name=ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-prometheus-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec 11 09:20:01 compute-0 podman[104859]: 2025-12-11 09:20:01.595479709 +0000 UTC m=+0.128557569 container exec_died 90c28ad7bc328840534f9285d34005394d82c163694e46ece6c6c1f2d03e7fe2 (image=quay.io/prometheus/prometheus:v2.51.0, name=ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-prometheus-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec 11 09:20:01 compute-0 sudo[104170]: pam_unix(sudo:session): session closed for user root
Dec 11 09:20:01 compute-0 ceph-mon[74426]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec 11 09:20:01 compute-0 ceph-mon[74426]: log_channel(audit) log [INF] : from='mgr.14577 192.168.122.100:0/2102670525' entity='mgr.compute-0.wwpcae' 
Dec 11 09:20:01 compute-0 ceph-mon[74426]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec 11 09:20:01 compute-0 ceph-mon[74426]: log_channel(audit) log [INF] : from='mgr.14577 192.168.122.100:0/2102670525' entity='mgr.compute-0.wwpcae' 
Dec 11 09:20:01 compute-0 sudo[104901]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 11 09:20:01 compute-0 sudo[104901]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 11 09:20:01 compute-0 sudo[104901]: pam_unix(sudo:session): session closed for user root
Dec 11 09:20:01 compute-0 sudo[104926]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --timeout 895 gather-facts
Dec 11 09:20:01 compute-0 sudo[104926]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 11 09:20:02 compute-0 sudo[104926]: pam_unix(sudo:session): session closed for user root
Dec 11 09:20:02 compute-0 ceph-mon[74426]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 11 09:20:02 compute-0 ceph-mon[74426]: log_channel(audit) log [DBG] : from='mgr.14577 192.168.122.100:0/2102670525' entity='mgr.compute-0.wwpcae' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 11 09:20:02 compute-0 ceph-mon[74426]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Dec 11 09:20:02 compute-0 ceph-mon[74426]: log_channel(audit) log [INF] : from='mgr.14577 192.168.122.100:0/2102670525' entity='mgr.compute-0.wwpcae' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec 11 09:20:02 compute-0 ceph-mon[74426]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Dec 11 09:20:02 compute-0 ceph-mon[74426]: pgmap v81: 353 pgs: 353 active+clean; 457 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 852 B/s wr, 2 op/s
Dec 11 09:20:02 compute-0 ceph-mon[74426]: from='mgr.14577 192.168.122.100:0/2102670525' entity='mgr.compute-0.wwpcae' 
Dec 11 09:20:02 compute-0 ceph-mon[74426]: from='mgr.14577 192.168.122.100:0/2102670525' entity='mgr.compute-0.wwpcae' 
Dec 11 09:20:02 compute-0 ceph-mon[74426]: log_channel(audit) log [INF] : from='mgr.14577 192.168.122.100:0/2102670525' entity='mgr.compute-0.wwpcae' 
Dec 11 09:20:02 compute-0 ceph-mon[74426]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Dec 11 09:20:02 compute-0 ceph-mon[74426]: log_channel(audit) log [INF] : from='mgr.14577 192.168.122.100:0/2102670525' entity='mgr.compute-0.wwpcae' 
Dec 11 09:20:02 compute-0 ceph-mon[74426]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Dec 11 09:20:02 compute-0 ceph-mon[74426]: log_channel(audit) log [DBG] : from='mgr.14577 192.168.122.100:0/2102670525' entity='mgr.compute-0.wwpcae' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec 11 09:20:02 compute-0 ceph-mon[74426]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Dec 11 09:20:02 compute-0 ceph-mon[74426]: log_channel(audit) log [INF] : from='mgr.14577 192.168.122.100:0/2102670525' entity='mgr.compute-0.wwpcae' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec 11 09:20:02 compute-0 ceph-mon[74426]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 11 09:20:02 compute-0 ceph-mon[74426]: log_channel(audit) log [DBG] : from='mgr.14577 192.168.122.100:0/2102670525' entity='mgr.compute-0.wwpcae' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 11 09:20:02 compute-0 sudo[104981]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 11 09:20:02 compute-0 sudo[104981]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 11 09:20:02 compute-0 sudo[104981]: pam_unix(sudo:session): session closed for user root
Dec 11 09:20:02 compute-0 sudo[105006]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Dec 11 09:20:02 compute-0 sudo[105006]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 11 09:20:02 compute-0 sudo[105055]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xqaaqjcoyvajbnpyoynawvpoyukfxopc ; /usr/bin/python3'
Dec 11 09:20:02 compute-0 sudo[105055]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 11 09:20:02 compute-0 python3[105057]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint radosgw-admin quay.io/ceph/ceph:v19 --fsid 31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch ps --daemon-type rgw --format json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 11 09:20:02 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-2-0-compute-0-iryjby[94782]: 11/12/2025 09:20:02 : epoch 693a8bfe : compute-0 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fef50003fe0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 11 09:20:02 compute-0 podman[105070]: 2025-12-11 09:20:02.923076841 +0000 UTC m=+0.052209663 container create 707f60d8416840fcac990be84daf5b5767782a1640b28df34bdc65df2dc6da16 (image=quay.io/ceph/ceph:v19, name=friendly_burnell, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Dec 11 09:20:02 compute-0 systemd[1]: Started libpod-conmon-707f60d8416840fcac990be84daf5b5767782a1640b28df34bdc65df2dc6da16.scope.
Dec 11 09:20:02 compute-0 systemd[1]: Started libcrun container.
Dec 11 09:20:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e5b3639b80c2c9bfe45c3af05905194461d7ef99e3178f2acfaf3d595575f080/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 11 09:20:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e5b3639b80c2c9bfe45c3af05905194461d7ef99e3178f2acfaf3d595575f080/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec 11 09:20:02 compute-0 podman[105070]: 2025-12-11 09:20:02.900101983 +0000 UTC m=+0.029234825 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Dec 11 09:20:03 compute-0 podman[105070]: 2025-12-11 09:20:03.004189337 +0000 UTC m=+0.133322179 container init 707f60d8416840fcac990be84daf5b5767782a1640b28df34bdc65df2dc6da16 (image=quay.io/ceph/ceph:v19, name=friendly_burnell, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, io.buildah.version=1.40.1)
Dec 11 09:20:03 compute-0 podman[105070]: 2025-12-11 09:20:03.01229785 +0000 UTC m=+0.141430672 container start 707f60d8416840fcac990be84daf5b5767782a1640b28df34bdc65df2dc6da16 (image=quay.io/ceph/ceph:v19, name=friendly_burnell, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 11 09:20:03 compute-0 podman[105070]: 2025-12-11 09:20:03.015879442 +0000 UTC m=+0.145012284 container attach 707f60d8416840fcac990be84daf5b5767782a1640b28df34bdc65df2dc6da16 (image=quay.io/ceph/ceph:v19, name=friendly_burnell, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, ceph=True)
Dec 11 09:20:03 compute-0 friendly_burnell[105106]: ERROR: invalid flag --daemon-type
Dec 11 09:20:03 compute-0 systemd[1]: libpod-707f60d8416840fcac990be84daf5b5767782a1640b28df34bdc65df2dc6da16.scope: Deactivated successfully.
Dec 11 09:20:03 compute-0 podman[105116]: 2025-12-11 09:20:03.092872219 +0000 UTC m=+0.043278644 container create 1386a146dc32ddfcbdcd7a78eeaa44c5faeb491678cd7d5d65a3f8ced300ad92 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=youthful_knuth, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 11 09:20:03 compute-0 radosgw[93354]: ====== starting new request req=0x7f36cd2845d0 =====
Dec 11 09:20:03 compute-0 radosgw[93354]: ====== req done req=0x7f36cd2845d0 op status=0 http_status=200 latency=0.001000032s ======
Dec 11 09:20:03 compute-0 radosgw[93354]: beast: 0x7f36cd2845d0: 192.168.122.100 - anonymous [11/Dec/2025:09:20:03.098 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Dec 11 09:20:03 compute-0 podman[105146]: 2025-12-11 09:20:03.120970467 +0000 UTC m=+0.032826377 container died 707f60d8416840fcac990be84daf5b5767782a1640b28df34bdc65df2dc6da16 (image=quay.io/ceph/ceph:v19, name=friendly_burnell, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Dec 11 09:20:03 compute-0 radosgw[93354]: ====== starting new request req=0x7f36cd2845d0 =====
Dec 11 09:20:03 compute-0 radosgw[93354]: ====== req done req=0x7f36cd2845d0 op status=0 http_status=200 latency=0.001000032s ======
Dec 11 09:20:03 compute-0 radosgw[93354]: beast: 0x7f36cd2845d0: 192.168.122.102 - anonymous [11/Dec/2025:09:20:03.119 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Dec 11 09:20:03 compute-0 systemd[1]: Started libpod-conmon-1386a146dc32ddfcbdcd7a78eeaa44c5faeb491678cd7d5d65a3f8ced300ad92.scope.
Dec 11 09:20:03 compute-0 ceph-mgr[74715]: log_channel(cluster) log [DBG] : pgmap v82: 353 pgs: 353 active+clean; 457 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 681 B/s wr, 2 op/s
Dec 11 09:20:03 compute-0 systemd[1]: var-lib-containers-storage-overlay-e5b3639b80c2c9bfe45c3af05905194461d7ef99e3178f2acfaf3d595575f080-merged.mount: Deactivated successfully.
Dec 11 09:20:03 compute-0 systemd[1]: Started libcrun container.
Dec 11 09:20:03 compute-0 podman[105146]: 2025-12-11 09:20:03.160426341 +0000 UTC m=+0.072282261 container remove 707f60d8416840fcac990be84daf5b5767782a1640b28df34bdc65df2dc6da16 (image=quay.io/ceph/ceph:v19, name=friendly_burnell, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, CEPH_REF=squid)
Dec 11 09:20:03 compute-0 systemd[1]: libpod-conmon-707f60d8416840fcac990be84daf5b5767782a1640b28df34bdc65df2dc6da16.scope: Deactivated successfully.
Dec 11 09:20:03 compute-0 podman[105116]: 2025-12-11 09:20:03.074733072 +0000 UTC m=+0.025139527 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec 11 09:20:03 compute-0 podman[105116]: 2025-12-11 09:20:03.176634808 +0000 UTC m=+0.127041263 container init 1386a146dc32ddfcbdcd7a78eeaa44c5faeb491678cd7d5d65a3f8ced300ad92 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=youthful_knuth, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 11 09:20:03 compute-0 podman[105116]: 2025-12-11 09:20:03.183642527 +0000 UTC m=+0.134048952 container start 1386a146dc32ddfcbdcd7a78eeaa44c5faeb491678cd7d5d65a3f8ced300ad92 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=youthful_knuth, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, ceph=True)
Dec 11 09:20:03 compute-0 systemd[1]: libpod-1386a146dc32ddfcbdcd7a78eeaa44c5faeb491678cd7d5d65a3f8ced300ad92.scope: Deactivated successfully.
Dec 11 09:20:03 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-2-0-compute-0-iryjby[94782]: 11/12/2025 09:20:03 : epoch 693a8bfe : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fef58003630 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 11 09:20:03 compute-0 youthful_knuth[105161]: 167 167
Dec 11 09:20:03 compute-0 podman[105116]: 2025-12-11 09:20:03.187107895 +0000 UTC m=+0.137514340 container attach 1386a146dc32ddfcbdcd7a78eeaa44c5faeb491678cd7d5d65a3f8ced300ad92 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=youthful_knuth, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, ceph=True)
Dec 11 09:20:03 compute-0 sudo[105055]: pam_unix(sudo:session): session closed for user root
Dec 11 09:20:03 compute-0 conmon[105161]: conmon 1386a146dc32ddfcbdcd <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-1386a146dc32ddfcbdcd7a78eeaa44c5faeb491678cd7d5d65a3f8ced300ad92.scope/container/memory.events
Dec 11 09:20:03 compute-0 podman[105116]: 2025-12-11 09:20:03.190255874 +0000 UTC m=+0.140662299 container died 1386a146dc32ddfcbdcd7a78eeaa44c5faeb491678cd7d5d65a3f8ced300ad92 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=youthful_knuth, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=squid, ceph=True)
Dec 11 09:20:03 compute-0 systemd[1]: var-lib-containers-storage-overlay-b02048197edf94ceaeb79680e875df909d9b7e0cdc1ca10dedaa48dcca4102f9-merged.mount: Deactivated successfully.
Dec 11 09:20:03 compute-0 podman[105116]: 2025-12-11 09:20:03.227372843 +0000 UTC m=+0.177779268 container remove 1386a146dc32ddfcbdcd7a78eeaa44c5faeb491678cd7d5d65a3f8ced300ad92 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=youthful_knuth, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec 11 09:20:03 compute-0 systemd[1]: libpod-conmon-1386a146dc32ddfcbdcd7a78eeaa44c5faeb491678cd7d5d65a3f8ced300ad92.scope: Deactivated successfully.
Dec 11 09:20:03 compute-0 podman[105188]: 2025-12-11 09:20:03.450460338 +0000 UTC m=+0.106541752 container create 74397c8cbc315a4aac6ee9a700ee77026ab90b08961f00191230acb5f6080e39 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sleepy_blackwell, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325)
Dec 11 09:20:03 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-2-0-compute-0-iryjby[94782]: 11/12/2025 09:20:03 : epoch 693a8bfe : compute-0 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fef4c001070 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 11 09:20:03 compute-0 systemd[1]: Started libpod-conmon-74397c8cbc315a4aac6ee9a700ee77026ab90b08961f00191230acb5f6080e39.scope.
Dec 11 09:20:03 compute-0 systemd[1]: Started libcrun container.
Dec 11 09:20:03 compute-0 podman[105188]: 2025-12-11 09:20:03.429188502 +0000 UTC m=+0.085269926 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec 11 09:20:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/350831c239ffcc380ac6a153b4504d66ece33488df29d0180ef135dc51068d9d/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 11 09:20:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/350831c239ffcc380ac6a153b4504d66ece33488df29d0180ef135dc51068d9d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 11 09:20:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/350831c239ffcc380ac6a153b4504d66ece33488df29d0180ef135dc51068d9d/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 11 09:20:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/350831c239ffcc380ac6a153b4504d66ece33488df29d0180ef135dc51068d9d/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 11 09:20:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/350831c239ffcc380ac6a153b4504d66ece33488df29d0180ef135dc51068d9d/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec 11 09:20:03 compute-0 podman[105188]: 2025-12-11 09:20:03.539566633 +0000 UTC m=+0.195648077 container init 74397c8cbc315a4aac6ee9a700ee77026ab90b08961f00191230acb5f6080e39 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sleepy_blackwell, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Dec 11 09:20:03 compute-0 podman[105188]: 2025-12-11 09:20:03.547509141 +0000 UTC m=+0.203590555 container start 74397c8cbc315a4aac6ee9a700ee77026ab90b08961f00191230acb5f6080e39 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sleepy_blackwell, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 11 09:20:03 compute-0 podman[105188]: 2025-12-11 09:20:03.551354952 +0000 UTC m=+0.207436386 container attach 74397c8cbc315a4aac6ee9a700ee77026ab90b08961f00191230acb5f6080e39 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sleepy_blackwell, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 11 09:20:03 compute-0 ceph-mon[74426]: from='mgr.14577 192.168.122.100:0/2102670525' entity='mgr.compute-0.wwpcae' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 11 09:20:03 compute-0 ceph-mon[74426]: from='mgr.14577 192.168.122.100:0/2102670525' entity='mgr.compute-0.wwpcae' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec 11 09:20:03 compute-0 ceph-mon[74426]: from='mgr.14577 192.168.122.100:0/2102670525' entity='mgr.compute-0.wwpcae' 
Dec 11 09:20:03 compute-0 ceph-mon[74426]: from='mgr.14577 192.168.122.100:0/2102670525' entity='mgr.compute-0.wwpcae' 
Dec 11 09:20:03 compute-0 ceph-mon[74426]: from='mgr.14577 192.168.122.100:0/2102670525' entity='mgr.compute-0.wwpcae' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec 11 09:20:03 compute-0 ceph-mon[74426]: from='mgr.14577 192.168.122.100:0/2102670525' entity='mgr.compute-0.wwpcae' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec 11 09:20:03 compute-0 ceph-mon[74426]: from='mgr.14577 192.168.122.100:0/2102670525' entity='mgr.compute-0.wwpcae' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 11 09:20:03 compute-0 ceph-mon[74426]: pgmap v82: 353 pgs: 353 active+clean; 457 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 681 B/s wr, 2 op/s
Dec 11 09:20:03 compute-0 sleepy_blackwell[105205]: --> passed data devices: 0 physical, 1 LVM
Dec 11 09:20:03 compute-0 sleepy_blackwell[105205]: --> All data devices are unavailable
Dec 11 09:20:03 compute-0 systemd[1]: libpod-74397c8cbc315a4aac6ee9a700ee77026ab90b08961f00191230acb5f6080e39.scope: Deactivated successfully.
Dec 11 09:20:03 compute-0 podman[105188]: 2025-12-11 09:20:03.936875443 +0000 UTC m=+0.592956857 container died 74397c8cbc315a4aac6ee9a700ee77026ab90b08961f00191230acb5f6080e39 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sleepy_blackwell, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.build-date=20250325)
Dec 11 09:20:03 compute-0 systemd[1]: var-lib-containers-storage-overlay-350831c239ffcc380ac6a153b4504d66ece33488df29d0180ef135dc51068d9d-merged.mount: Deactivated successfully.
Dec 11 09:20:04 compute-0 podman[105188]: 2025-12-11 09:20:04.021724336 +0000 UTC m=+0.677805740 container remove 74397c8cbc315a4aac6ee9a700ee77026ab90b08961f00191230acb5f6080e39 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sleepy_blackwell, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True)
Dec 11 09:20:04 compute-0 systemd[1]: libpod-conmon-74397c8cbc315a4aac6ee9a700ee77026ab90b08961f00191230acb5f6080e39.scope: Deactivated successfully.
Dec 11 09:20:04 compute-0 sudo[105006]: pam_unix(sudo:session): session closed for user root
Dec 11 09:20:04 compute-0 sudo[105229]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 11 09:20:04 compute-0 sudo[105229]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 11 09:20:04 compute-0 sudo[105229]: pam_unix(sudo:session): session closed for user root
Dec 11 09:20:04 compute-0 sudo[105254]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060 -- lvm list --format json
Dec 11 09:20:04 compute-0 sudo[105254]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 11 09:20:04 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-mgr-compute-0-wwpcae[74711]: ::ffff:192.168.122.100 - - [11/Dec/2025:09:20:04] "GET /metrics HTTP/1.1" 200 48223 "" "Prometheus/2.51.0"
Dec 11 09:20:04 compute-0 ceph-mgr[74715]: [prometheus INFO cherrypy.access.139680037156848] ::ffff:192.168.122.100 - - [11/Dec/2025:09:20:04] "GET /metrics HTTP/1.1" 200 48223 "" "Prometheus/2.51.0"
Dec 11 09:20:04 compute-0 podman[105317]: 2025-12-11 09:20:04.663347332 +0000 UTC m=+0.052192982 container create 159bb6ffe112055039f581e8613a0e22d173eb211f87a6709a632a590267ff97 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=fervent_gagarin, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Dec 11 09:20:04 compute-0 systemd[1]: Started libpod-conmon-159bb6ffe112055039f581e8613a0e22d173eb211f87a6709a632a590267ff97.scope.
Dec 11 09:20:04 compute-0 podman[105317]: 2025-12-11 09:20:04.642046697 +0000 UTC m=+0.030892377 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec 11 09:20:04 compute-0 systemd[1]: Started libcrun container.
Dec 11 09:20:04 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-2-0-compute-0-iryjby[94782]: 11/12/2025 09:20:04 : epoch 693a8bfe : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fef4c001070 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 11 09:20:04 compute-0 podman[105317]: 2025-12-11 09:20:04.916929141 +0000 UTC m=+0.305774811 container init 159bb6ffe112055039f581e8613a0e22d173eb211f87a6709a632a590267ff97 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=fervent_gagarin, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 11 09:20:04 compute-0 podman[105317]: 2025-12-11 09:20:04.927409618 +0000 UTC m=+0.316255288 container start 159bb6ffe112055039f581e8613a0e22d173eb211f87a6709a632a590267ff97 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=fervent_gagarin, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 11 09:20:04 compute-0 fervent_gagarin[105334]: 167 167
Dec 11 09:20:04 compute-0 systemd[1]: libpod-159bb6ffe112055039f581e8613a0e22d173eb211f87a6709a632a590267ff97.scope: Deactivated successfully.
Dec 11 09:20:04 compute-0 podman[105317]: 2025-12-11 09:20:04.956877259 +0000 UTC m=+0.345722939 container attach 159bb6ffe112055039f581e8613a0e22d173eb211f87a6709a632a590267ff97 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=fervent_gagarin, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=squid, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 11 09:20:04 compute-0 podman[105317]: 2025-12-11 09:20:04.957602471 +0000 UTC m=+0.346448131 container died 159bb6ffe112055039f581e8613a0e22d173eb211f87a6709a632a590267ff97 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=fervent_gagarin, org.label-schema.license=GPLv2, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 11 09:20:05 compute-0 systemd[1]: var-lib-containers-storage-overlay-9e9ec0620d4483982bcee59e165d3d9d562cbcf6fbe54c715ba20cd6eb23e763-merged.mount: Deactivated successfully.
Dec 11 09:20:05 compute-0 radosgw[93354]: ====== starting new request req=0x7f36cd2845d0 =====
Dec 11 09:20:05 compute-0 radosgw[93354]: ====== req done req=0x7f36cd2845d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 11 09:20:05 compute-0 radosgw[93354]: beast: 0x7f36cd2845d0: 192.168.122.100 - anonymous [11/Dec/2025:09:20:05.101 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 11 09:20:05 compute-0 radosgw[93354]: ====== starting new request req=0x7f36cd2845d0 =====
Dec 11 09:20:05 compute-0 radosgw[93354]: ====== req done req=0x7f36cd2845d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 11 09:20:05 compute-0 radosgw[93354]: beast: 0x7f36cd2845d0: 192.168.122.102 - anonymous [11/Dec/2025:09:20:05.123 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 11 09:20:05 compute-0 podman[105317]: 2025-12-11 09:20:05.127239735 +0000 UTC m=+0.516085385 container remove 159bb6ffe112055039f581e8613a0e22d173eb211f87a6709a632a590267ff97 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=fervent_gagarin, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS)
Dec 11 09:20:05 compute-0 ceph-mgr[74715]: log_channel(cluster) log [DBG] : pgmap v83: 353 pgs: 353 active+clean; 457 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.4 KiB/s rd, 681 B/s wr, 2 op/s
Dec 11 09:20:05 compute-0 systemd[1]: libpod-conmon-159bb6ffe112055039f581e8613a0e22d173eb211f87a6709a632a590267ff97.scope: Deactivated successfully.
Dec 11 09:20:05 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-2-0-compute-0-iryjby[94782]: 11/12/2025 09:20:05 : epoch 693a8bfe : compute-0 : ganesha.nfsd-2[svc_22] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fef50003fe0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 11 09:20:05 compute-0 podman[105361]: 2025-12-11 09:20:05.317366608 +0000 UTC m=+0.053513744 container create 918a8eff6b5a5eb82567c09257d64c8481d0b741570ec19dfdc7b13acb28c8e0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=compassionate_varahamihira, CEPH_REF=squid, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 11 09:20:05 compute-0 systemd[1]: Started libpod-conmon-918a8eff6b5a5eb82567c09257d64c8481d0b741570ec19dfdc7b13acb28c8e0.scope.
Dec 11 09:20:05 compute-0 podman[105361]: 2025-12-11 09:20:05.295572797 +0000 UTC m=+0.031719963 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec 11 09:20:05 compute-0 systemd[1]: Started libcrun container.
Dec 11 09:20:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/82dd1dc6703cff98302422a5af93b08ebaae7355f6a7ee55cf48825dd6f2c89f/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 11 09:20:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/82dd1dc6703cff98302422a5af93b08ebaae7355f6a7ee55cf48825dd6f2c89f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 11 09:20:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/82dd1dc6703cff98302422a5af93b08ebaae7355f6a7ee55cf48825dd6f2c89f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 11 09:20:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/82dd1dc6703cff98302422a5af93b08ebaae7355f6a7ee55cf48825dd6f2c89f/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 11 09:20:05 compute-0 podman[105361]: 2025-12-11 09:20:05.420530613 +0000 UTC m=+0.156677769 container init 918a8eff6b5a5eb82567c09257d64c8481d0b741570ec19dfdc7b13acb28c8e0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=compassionate_varahamihira, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, ceph=True, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 11 09:20:05 compute-0 podman[105361]: 2025-12-11 09:20:05.432755436 +0000 UTC m=+0.168902572 container start 918a8eff6b5a5eb82567c09257d64c8481d0b741570ec19dfdc7b13acb28c8e0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=compassionate_varahamihira, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 11 09:20:05 compute-0 podman[105361]: 2025-12-11 09:20:05.436664147 +0000 UTC m=+0.172811283 container attach 918a8eff6b5a5eb82567c09257d64c8481d0b741570ec19dfdc7b13acb28c8e0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=compassionate_varahamihira, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=squid, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Dec 11 09:20:05 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-2-0-compute-0-iryjby[94782]: 11/12/2025 09:20:05 : epoch 693a8bfe : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fef70001ac0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 11 09:20:05 compute-0 compassionate_varahamihira[105377]: {
Dec 11 09:20:05 compute-0 compassionate_varahamihira[105377]:     "1": [
Dec 11 09:20:05 compute-0 compassionate_varahamihira[105377]:         {
Dec 11 09:20:05 compute-0 compassionate_varahamihira[105377]:             "devices": [
Dec 11 09:20:05 compute-0 compassionate_varahamihira[105377]:                 "/dev/loop3"
Dec 11 09:20:05 compute-0 compassionate_varahamihira[105377]:             ],
Dec 11 09:20:05 compute-0 compassionate_varahamihira[105377]:             "lv_name": "ceph_lv0",
Dec 11 09:20:05 compute-0 compassionate_varahamihira[105377]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec 11 09:20:05 compute-0 compassionate_varahamihira[105377]:             "lv_size": "21470642176",
Dec 11 09:20:05 compute-0 compassionate_varahamihira[105377]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=TIjPv6-JN4c-U1Hf-F3k1-2mgj-CwGi-pbjIHh,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=865a308b-cdc3-4034-b5eb-feb596b462bf,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 11 09:20:05 compute-0 compassionate_varahamihira[105377]:             "lv_uuid": "TIjPv6-JN4c-U1Hf-F3k1-2mgj-CwGi-pbjIHh",
Dec 11 09:20:05 compute-0 compassionate_varahamihira[105377]:             "name": "ceph_lv0",
Dec 11 09:20:05 compute-0 compassionate_varahamihira[105377]:             "path": "/dev/ceph_vg0/ceph_lv0",
Dec 11 09:20:05 compute-0 compassionate_varahamihira[105377]:             "tags": {
Dec 11 09:20:05 compute-0 compassionate_varahamihira[105377]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec 11 09:20:05 compute-0 compassionate_varahamihira[105377]:                 "ceph.block_uuid": "TIjPv6-JN4c-U1Hf-F3k1-2mgj-CwGi-pbjIHh",
Dec 11 09:20:05 compute-0 compassionate_varahamihira[105377]:                 "ceph.cephx_lockbox_secret": "",
Dec 11 09:20:05 compute-0 compassionate_varahamihira[105377]:                 "ceph.cluster_fsid": "31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060",
Dec 11 09:20:05 compute-0 compassionate_varahamihira[105377]:                 "ceph.cluster_name": "ceph",
Dec 11 09:20:05 compute-0 compassionate_varahamihira[105377]:                 "ceph.crush_device_class": "",
Dec 11 09:20:05 compute-0 compassionate_varahamihira[105377]:                 "ceph.encrypted": "0",
Dec 11 09:20:05 compute-0 compassionate_varahamihira[105377]:                 "ceph.osd_fsid": "865a308b-cdc3-4034-b5eb-feb596b462bf",
Dec 11 09:20:05 compute-0 compassionate_varahamihira[105377]:                 "ceph.osd_id": "1",
Dec 11 09:20:05 compute-0 compassionate_varahamihira[105377]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 11 09:20:05 compute-0 compassionate_varahamihira[105377]:                 "ceph.type": "block",
Dec 11 09:20:05 compute-0 compassionate_varahamihira[105377]:                 "ceph.vdo": "0",
Dec 11 09:20:05 compute-0 compassionate_varahamihira[105377]:                 "ceph.with_tpm": "0"
Dec 11 09:20:05 compute-0 compassionate_varahamihira[105377]:             },
Dec 11 09:20:05 compute-0 compassionate_varahamihira[105377]:             "type": "block",
Dec 11 09:20:05 compute-0 compassionate_varahamihira[105377]:             "vg_name": "ceph_vg0"
Dec 11 09:20:05 compute-0 compassionate_varahamihira[105377]:         }
Dec 11 09:20:05 compute-0 compassionate_varahamihira[105377]:     ]
Dec 11 09:20:05 compute-0 compassionate_varahamihira[105377]: }
Dec 11 09:20:05 compute-0 systemd[1]: libpod-918a8eff6b5a5eb82567c09257d64c8481d0b741570ec19dfdc7b13acb28c8e0.scope: Deactivated successfully.
Dec 11 09:20:05 compute-0 podman[105361]: 2025-12-11 09:20:05.807882072 +0000 UTC m=+0.544029228 container died 918a8eff6b5a5eb82567c09257d64c8481d0b741570ec19dfdc7b13acb28c8e0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=compassionate_varahamihira, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 11 09:20:05 compute-0 systemd[1]: var-lib-containers-storage-overlay-82dd1dc6703cff98302422a5af93b08ebaae7355f6a7ee55cf48825dd6f2c89f-merged.mount: Deactivated successfully.
Dec 11 09:20:05 compute-0 podman[105361]: 2025-12-11 09:20:05.865822743 +0000 UTC m=+0.601969879 container remove 918a8eff6b5a5eb82567c09257d64c8481d0b741570ec19dfdc7b13acb28c8e0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=compassionate_varahamihira, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec 11 09:20:05 compute-0 systemd[1]: libpod-conmon-918a8eff6b5a5eb82567c09257d64c8481d0b741570ec19dfdc7b13acb28c8e0.scope: Deactivated successfully.
Dec 11 09:20:05 compute-0 ceph-mon[74426]: pgmap v83: 353 pgs: 353 active+clean; 457 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.4 KiB/s rd, 681 B/s wr, 2 op/s
Dec 11 09:20:05 compute-0 sudo[105254]: pam_unix(sudo:session): session closed for user root
Dec 11 09:20:06 compute-0 sudo[105398]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 11 09:20:06 compute-0 sudo[105398]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 11 09:20:06 compute-0 sudo[105398]: pam_unix(sudo:session): session closed for user root
Dec 11 09:20:06 compute-0 sudo[105423]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060 -- raw list --format json
Dec 11 09:20:06 compute-0 sudo[105423]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 11 09:20:06 compute-0 ceph-mon[74426]: mon.compute-0@0(leader).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 11 09:20:06 compute-0 podman[105488]: 2025-12-11 09:20:06.636478359 +0000 UTC m=+0.062453883 container create e740c7f4b8674b12f7e4e04975330cadf75a0cf63ee96f6a984392317ca9d9be (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=elastic_babbage, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 11 09:20:06 compute-0 systemd[1]: Started libpod-conmon-e740c7f4b8674b12f7e4e04975330cadf75a0cf63ee96f6a984392317ca9d9be.scope.
Dec 11 09:20:06 compute-0 podman[105488]: 2025-12-11 09:20:06.609302457 +0000 UTC m=+0.035278341 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec 11 09:20:06 compute-0 systemd[1]: Started libcrun container.
Dec 11 09:20:06 compute-0 podman[105488]: 2025-12-11 09:20:06.735197451 +0000 UTC m=+0.161172975 container init e740c7f4b8674b12f7e4e04975330cadf75a0cf63ee96f6a984392317ca9d9be (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=elastic_babbage, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Dec 11 09:20:06 compute-0 podman[105488]: 2025-12-11 09:20:06.743394501 +0000 UTC m=+0.169370025 container start e740c7f4b8674b12f7e4e04975330cadf75a0cf63ee96f6a984392317ca9d9be (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=elastic_babbage, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Dec 11 09:20:06 compute-0 podman[105488]: 2025-12-11 09:20:06.747615806 +0000 UTC m=+0.173591340 container attach e740c7f4b8674b12f7e4e04975330cadf75a0cf63ee96f6a984392317ca9d9be (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=elastic_babbage, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 11 09:20:06 compute-0 elastic_babbage[105505]: 167 167
Dec 11 09:20:06 compute-0 systemd[1]: libpod-e740c7f4b8674b12f7e4e04975330cadf75a0cf63ee96f6a984392317ca9d9be.scope: Deactivated successfully.
Dec 11 09:20:06 compute-0 podman[105488]: 2025-12-11 09:20:06.751450648 +0000 UTC m=+0.177426172 container died e740c7f4b8674b12f7e4e04975330cadf75a0cf63ee96f6a984392317ca9d9be (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=elastic_babbage, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, ceph=True)
Dec 11 09:20:06 compute-0 systemd[1]: var-lib-containers-storage-overlay-5f8efd535ee7d52f2cb78939247ec042d62b64088b3f2c0ff9fb93c7b2eec36c-merged.mount: Deactivated successfully.
Dec 11 09:20:06 compute-0 podman[105488]: 2025-12-11 09:20:06.795785624 +0000 UTC m=+0.221761148 container remove e740c7f4b8674b12f7e4e04975330cadf75a0cf63ee96f6a984392317ca9d9be (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=elastic_babbage, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 11 09:20:06 compute-0 systemd[1]: libpod-conmon-e740c7f4b8674b12f7e4e04975330cadf75a0cf63ee96f6a984392317ca9d9be.scope: Deactivated successfully.
Dec 11 09:20:06 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-2-0-compute-0-iryjby[94782]: 11/12/2025 09:20:06 : epoch 693a8bfe : compute-0 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fef58003630 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 11 09:20:07 compute-0 podman[105528]: 2025-12-11 09:20:07.000465939 +0000 UTC m=+0.054832471 container create 7ec2cadbbe918894a97ecfbf161d26fb6efeec891c7dcfe5324a72faf3b5f4db (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=musing_fermat, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Dec 11 09:20:07 compute-0 systemd[1]: Started libpod-conmon-7ec2cadbbe918894a97ecfbf161d26fb6efeec891c7dcfe5324a72faf3b5f4db.scope.
Dec 11 09:20:07 compute-0 podman[105528]: 2025-12-11 09:20:06.976944132 +0000 UTC m=+0.031310684 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec 11 09:20:07 compute-0 systemd[1]: Started libcrun container.
Dec 11 09:20:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/851aabf23c2535ff2f179c3d3d0f34a219f0e845f1d2175d98ad6fa4b4818873/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 11 09:20:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/851aabf23c2535ff2f179c3d3d0f34a219f0e845f1d2175d98ad6fa4b4818873/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 11 09:20:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/851aabf23c2535ff2f179c3d3d0f34a219f0e845f1d2175d98ad6fa4b4818873/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 11 09:20:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/851aabf23c2535ff2f179c3d3d0f34a219f0e845f1d2175d98ad6fa4b4818873/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 11 09:20:07 compute-0 radosgw[93354]: ====== starting new request req=0x7f36cd2845d0 =====
Dec 11 09:20:07 compute-0 radosgw[93354]: ====== req done req=0x7f36cd2845d0 op status=0 http_status=200 latency=0.001000032s ======
Dec 11 09:20:07 compute-0 radosgw[93354]: beast: 0x7f36cd2845d0: 192.168.122.100 - anonymous [11/Dec/2025:09:20:07.105 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Dec 11 09:20:07 compute-0 podman[105528]: 2025-12-11 09:20:07.11587142 +0000 UTC m=+0.170237972 container init 7ec2cadbbe918894a97ecfbf161d26fb6efeec891c7dcfe5324a72faf3b5f4db (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=musing_fermat, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 11 09:20:07 compute-0 podman[105528]: 2025-12-11 09:20:07.127008993 +0000 UTC m=+0.181375515 container start 7ec2cadbbe918894a97ecfbf161d26fb6efeec891c7dcfe5324a72faf3b5f4db (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=musing_fermat, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 11 09:20:07 compute-0 radosgw[93354]: ====== starting new request req=0x7f36cd2845d0 =====
Dec 11 09:20:07 compute-0 radosgw[93354]: ====== req done req=0x7f36cd2845d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 11 09:20:07 compute-0 radosgw[93354]: beast: 0x7f36cd2845d0: 192.168.122.102 - anonymous [11/Dec/2025:09:20:07.126 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 11 09:20:07 compute-0 ceph-mgr[74715]: log_channel(cluster) log [DBG] : pgmap v84: 353 pgs: 353 active+clean; 457 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 B/s wr, 0 op/s
Dec 11 09:20:07 compute-0 podman[105528]: 2025-12-11 09:20:07.142714932 +0000 UTC m=+0.197081454 container attach 7ec2cadbbe918894a97ecfbf161d26fb6efeec891c7dcfe5324a72faf3b5f4db (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=musing_fermat, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, ceph=True, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Dec 11 09:20:07 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-2-0-compute-0-iryjby[94782]: 11/12/2025 09:20:07 : epoch 693a8bfe : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fef4c001070 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 11 09:20:07 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-2-0-compute-0-iryjby[94782]: 11/12/2025 09:20:07 : epoch 693a8bfe : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fef50003fe0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 11 09:20:07 compute-0 lvm[105620]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Dec 11 09:20:07 compute-0 lvm[105620]: VG ceph_vg0 finished
Dec 11 09:20:07 compute-0 musing_fermat[105546]: {}
Dec 11 09:20:07 compute-0 systemd[1]: libpod-7ec2cadbbe918894a97ecfbf161d26fb6efeec891c7dcfe5324a72faf3b5f4db.scope: Deactivated successfully.
Dec 11 09:20:07 compute-0 systemd[1]: libpod-7ec2cadbbe918894a97ecfbf161d26fb6efeec891c7dcfe5324a72faf3b5f4db.scope: Consumed 1.443s CPU time.
Dec 11 09:20:08 compute-0 podman[105624]: 2025-12-11 09:20:08.046283453 +0000 UTC m=+0.032687928 container died 7ec2cadbbe918894a97ecfbf161d26fb6efeec891c7dcfe5324a72faf3b5f4db (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=musing_fermat, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec 11 09:20:08 compute-0 systemd[1]: var-lib-containers-storage-overlay-851aabf23c2535ff2f179c3d3d0f34a219f0e845f1d2175d98ad6fa4b4818873-merged.mount: Deactivated successfully.
Dec 11 09:20:08 compute-0 podman[105624]: 2025-12-11 09:20:08.095444303 +0000 UTC m=+0.081848748 container remove 7ec2cadbbe918894a97ecfbf161d26fb6efeec891c7dcfe5324a72faf3b5f4db (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=musing_fermat, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Dec 11 09:20:08 compute-0 systemd[1]: libpod-conmon-7ec2cadbbe918894a97ecfbf161d26fb6efeec891c7dcfe5324a72faf3b5f4db.scope: Deactivated successfully.
Dec 11 09:20:08 compute-0 sudo[105423]: pam_unix(sudo:session): session closed for user root
Dec 11 09:20:08 compute-0 ceph-mon[74426]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec 11 09:20:08 compute-0 ceph-mon[74426]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Dec 11 09:20:08 compute-0 ceph-mon[74426]: log_channel(audit) log [DBG] : from='mgr.14577 192.168.122.100:0/2102670525' entity='mgr.compute-0.wwpcae' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec 11 09:20:08 compute-0 ceph-mon[74426]: log_channel(audit) log [INF] : from='mgr.14577 192.168.122.100:0/2102670525' entity='mgr.compute-0.wwpcae' 
Dec 11 09:20:08 compute-0 ceph-mon[74426]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec 11 09:20:08 compute-0 ceph-mon[74426]: log_channel(audit) log [INF] : from='mgr.14577 192.168.122.100:0/2102670525' entity='mgr.compute-0.wwpcae' 
Dec 11 09:20:08 compute-0 ceph-mon[74426]: pgmap v84: 353 pgs: 353 active+clean; 457 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 B/s wr, 0 op/s
Dec 11 09:20:08 compute-0 ceph-mon[74426]: from='mgr.14577 192.168.122.100:0/2102670525' entity='mgr.compute-0.wwpcae' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec 11 09:20:08 compute-0 ceph-mon[74426]: from='mgr.14577 192.168.122.100:0/2102670525' entity='mgr.compute-0.wwpcae' 
Dec 11 09:20:08 compute-0 sudo[105639]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 11 09:20:08 compute-0 sudo[105639]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 11 09:20:08 compute-0 sudo[105639]: pam_unix(sudo:session): session closed for user root
Dec 11 09:20:08 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-2-0-compute-0-iryjby[94782]: 11/12/2025 09:20:08 : epoch 693a8bfe : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fef4c001070 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 11 09:20:09 compute-0 radosgw[93354]: ====== starting new request req=0x7f36cd2845d0 =====
Dec 11 09:20:09 compute-0 radosgw[93354]: ====== req done req=0x7f36cd2845d0 op status=0 http_status=200 latency=0.001000032s ======
Dec 11 09:20:09 compute-0 radosgw[93354]: beast: 0x7f36cd2845d0: 192.168.122.100 - anonymous [11/Dec/2025:09:20:09.108 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Dec 11 09:20:09 compute-0 ceph-mgr[74715]: log_channel(cluster) log [DBG] : pgmap v85: 353 pgs: 353 active+clean; 457 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 340 B/s rd, 0 B/s wr, 0 op/s
Dec 11 09:20:09 compute-0 radosgw[93354]: ====== starting new request req=0x7f36cd2845d0 =====
Dec 11 09:20:09 compute-0 radosgw[93354]: ====== req done req=0x7f36cd2845d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 11 09:20:09 compute-0 radosgw[93354]: beast: 0x7f36cd2845d0: 192.168.122.102 - anonymous [11/Dec/2025:09:20:09.130 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 11 09:20:09 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-2-0-compute-0-iryjby[94782]: 11/12/2025 09:20:09 : epoch 693a8bfe : compute-0 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fef4c001070 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 11 09:20:09 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-2-0-compute-0-iryjby[94782]: 11/12/2025 09:20:09 : epoch 693a8bfe : compute-0 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fef58003630 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 11 09:20:09 compute-0 ceph-mon[74426]: from='mgr.14577 192.168.122.100:0/2102670525' entity='mgr.compute-0.wwpcae' 
Dec 11 09:20:09 compute-0 ceph-mon[74426]: pgmap v85: 353 pgs: 353 active+clean; 457 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 340 B/s rd, 0 B/s wr, 0 op/s
Dec 11 09:20:10 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-2-0-compute-0-iryjby[94782]: 11/12/2025 09:20:10 : epoch 693a8bfe : compute-0 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fef50003fe0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 11 09:20:11 compute-0 radosgw[93354]: ====== starting new request req=0x7f36cd2845d0 =====
Dec 11 09:20:11 compute-0 radosgw[93354]: ====== req done req=0x7f36cd2845d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 11 09:20:11 compute-0 radosgw[93354]: beast: 0x7f36cd2845d0: 192.168.122.100 - anonymous [11/Dec/2025:09:20:11.111 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 11 09:20:11 compute-0 ceph-mgr[74715]: log_channel(cluster) log [DBG] : pgmap v86: 353 pgs: 353 active+clean; 457 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 B/s wr, 0 op/s
Dec 11 09:20:11 compute-0 radosgw[93354]: ====== starting new request req=0x7f36cd2845d0 =====
Dec 11 09:20:11 compute-0 radosgw[93354]: ====== req done req=0x7f36cd2845d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 11 09:20:11 compute-0 radosgw[93354]: beast: 0x7f36cd2845d0: 192.168.122.102 - anonymous [11/Dec/2025:09:20:11.133 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 11 09:20:11 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-2-0-compute-0-iryjby[94782]: 11/12/2025 09:20:11 : epoch 693a8bfe : compute-0 : ganesha.nfsd-2[svc_22] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fef50003fe0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 11 09:20:11 compute-0 ceph-mon[74426]: mon.compute-0@0(leader).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 11 09:20:11 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-2-0-compute-0-iryjby[94782]: 11/12/2025 09:20:11 : epoch 693a8bfe : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fef58003630 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 11 09:20:12 compute-0 ceph-mon[74426]: pgmap v86: 353 pgs: 353 active+clean; 457 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 B/s wr, 0 op/s
Dec 11 09:20:12 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-2-0-compute-0-iryjby[94782]: 11/12/2025 09:20:12 : epoch 693a8bfe : compute-0 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fef58003630 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 11 09:20:13 compute-0 radosgw[93354]: ====== starting new request req=0x7f36cd2845d0 =====
Dec 11 09:20:13 compute-0 radosgw[93354]: ====== req done req=0x7f36cd2845d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 11 09:20:13 compute-0 radosgw[93354]: beast: 0x7f36cd2845d0: 192.168.122.100 - anonymous [11/Dec/2025:09:20:13.113 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 11 09:20:13 compute-0 ceph-mgr[74715]: log_channel(cluster) log [DBG] : pgmap v87: 353 pgs: 353 active+clean; 457 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 11 09:20:13 compute-0 radosgw[93354]: ====== starting new request req=0x7f36cd2845d0 =====
Dec 11 09:20:13 compute-0 radosgw[93354]: ====== req done req=0x7f36cd2845d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 11 09:20:13 compute-0 radosgw[93354]: beast: 0x7f36cd2845d0: 192.168.122.102 - anonymous [11/Dec/2025:09:20:13.136 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 11 09:20:13 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-2-0-compute-0-iryjby[94782]: 11/12/2025 09:20:13 : epoch 693a8bfe : compute-0 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fef4c001070 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 11 09:20:13 compute-0 sudo[105693]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hubvjlzrcanrtwhttecvmyogalqozyba ; /usr/bin/python3'
Dec 11 09:20:13 compute-0 sudo[105693]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 11 09:20:13 compute-0 python3[105695]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint radosgw-admin quay.io/ceph/ceph:v19 --fsid 31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch ps --daemon-type rgw --format json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 11 09:20:13 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-2-0-compute-0-iryjby[94782]: 11/12/2025 09:20:13 : epoch 693a8bfe : compute-0 : ganesha.nfsd-2[svc_22] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fef50003fe0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 11 09:20:13 compute-0 podman[105696]: 2025-12-11 09:20:13.504838286 +0000 UTC m=+0.051861906 container create ad37ca7aef37036f64fa3d8a9e33acca15fccd88189385ee98e5fb4457a09cae (image=quay.io/ceph/ceph:v19, name=focused_perlman, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325)
Dec 11 09:20:13 compute-0 systemd[1]: Started libpod-conmon-ad37ca7aef37036f64fa3d8a9e33acca15fccd88189385ee98e5fb4457a09cae.scope.
Dec 11 09:20:13 compute-0 podman[105696]: 2025-12-11 09:20:13.486751983 +0000 UTC m=+0.033775643 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Dec 11 09:20:13 compute-0 ceph-mon[74426]: pgmap v87: 353 pgs: 353 active+clean; 457 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 11 09:20:13 compute-0 systemd[1]: Started libcrun container.
Dec 11 09:20:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/83851f0ac4a7a85512f540a1803c7de035a3946d3bf9e4f6e16df517425aba32/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec 11 09:20:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/83851f0ac4a7a85512f540a1803c7de035a3946d3bf9e4f6e16df517425aba32/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 11 09:20:13 compute-0 podman[105696]: 2025-12-11 09:20:13.629815942 +0000 UTC m=+0.176839602 container init ad37ca7aef37036f64fa3d8a9e33acca15fccd88189385ee98e5fb4457a09cae (image=quay.io/ceph/ceph:v19, name=focused_perlman, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=squid)
Dec 11 09:20:13 compute-0 podman[105696]: 2025-12-11 09:20:13.639291882 +0000 UTC m=+0.186315512 container start ad37ca7aef37036f64fa3d8a9e33acca15fccd88189385ee98e5fb4457a09cae (image=quay.io/ceph/ceph:v19, name=focused_perlman, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec 11 09:20:13 compute-0 podman[105696]: 2025-12-11 09:20:13.666932129 +0000 UTC m=+0.213955759 container attach ad37ca7aef37036f64fa3d8a9e33acca15fccd88189385ee98e5fb4457a09cae (image=quay.io/ceph/ceph:v19, name=focused_perlman, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, OSD_FLAVOR=default, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Dec 11 09:20:13 compute-0 focused_perlman[105712]: ERROR: invalid flag --daemon-type
Dec 11 09:20:13 compute-0 systemd[1]: libpod-ad37ca7aef37036f64fa3d8a9e33acca15fccd88189385ee98e5fb4457a09cae.scope: Deactivated successfully.
Dec 11 09:20:13 compute-0 podman[105696]: 2025-12-11 09:20:13.703150338 +0000 UTC m=+0.250173998 container died ad37ca7aef37036f64fa3d8a9e33acca15fccd88189385ee98e5fb4457a09cae (image=quay.io/ceph/ceph:v19, name=focused_perlman, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325)
Dec 11 09:20:13 compute-0 systemd[1]: var-lib-containers-storage-overlay-83851f0ac4a7a85512f540a1803c7de035a3946d3bf9e4f6e16df517425aba32-merged.mount: Deactivated successfully.
Dec 11 09:20:14 compute-0 podman[105696]: 2025-12-11 09:20:14.031393914 +0000 UTC m=+0.578417544 container remove ad37ca7aef37036f64fa3d8a9e33acca15fccd88189385ee98e5fb4457a09cae (image=quay.io/ceph/ceph:v19, name=focused_perlman, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Dec 11 09:20:14 compute-0 systemd[1]: libpod-conmon-ad37ca7aef37036f64fa3d8a9e33acca15fccd88189385ee98e5fb4457a09cae.scope: Deactivated successfully.
Dec 11 09:20:14 compute-0 sudo[105693]: pam_unix(sudo:session): session closed for user root
Dec 11 09:20:14 compute-0 ceph-mgr[74715]: [prometheus INFO cherrypy.access.139680037156848] ::ffff:192.168.122.100 - - [11/Dec/2025:09:20:14] "GET /metrics HTTP/1.1" 200 48226 "" "Prometheus/2.51.0"
Dec 11 09:20:14 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-mgr-compute-0-wwpcae[74711]: ::ffff:192.168.122.100 - - [11/Dec/2025:09:20:14] "GET /metrics HTTP/1.1" 200 48226 "" "Prometheus/2.51.0"
Dec 11 09:20:14 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-2-0-compute-0-iryjby[94782]: 11/12/2025 09:20:14 : epoch 693a8bfe : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fef58003630 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 11 09:20:15 compute-0 radosgw[93354]: ====== starting new request req=0x7f36cd2845d0 =====
Dec 11 09:20:15 compute-0 radosgw[93354]: ====== req done req=0x7f36cd2845d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 11 09:20:15 compute-0 radosgw[93354]: beast: 0x7f36cd2845d0: 192.168.122.100 - anonymous [11/Dec/2025:09:20:15.114 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 11 09:20:15 compute-0 ceph-mgr[74715]: log_channel(cluster) log [DBG] : pgmap v88: 353 pgs: 353 active+clean; 457 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 0 op/s
Dec 11 09:20:15 compute-0 radosgw[93354]: ====== starting new request req=0x7f36cd2845d0 =====
Dec 11 09:20:15 compute-0 radosgw[93354]: ====== req done req=0x7f36cd2845d0 op status=0 http_status=200 latency=0.001000032s ======
Dec 11 09:20:15 compute-0 radosgw[93354]: beast: 0x7f36cd2845d0: 192.168.122.102 - anonymous [11/Dec/2025:09:20:15.139 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Dec 11 09:20:15 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-2-0-compute-0-iryjby[94782]: 11/12/2025 09:20:15 : epoch 693a8bfe : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fef70001ac0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 11 09:20:15 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-2-0-compute-0-iryjby[94782]: 11/12/2025 09:20:15 : epoch 693a8bfe : compute-0 : ganesha.nfsd-2[svc_22] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fef70001ac0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 11 09:20:16 compute-0 ceph-mon[74426]: pgmap v88: 353 pgs: 353 active+clean; 457 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 0 op/s
Dec 11 09:20:16 compute-0 ceph-mon[74426]: mon.compute-0@0(leader).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 11 09:20:16 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-2-0-compute-0-iryjby[94782]: 11/12/2025 09:20:16 : epoch 693a8bfe : compute-0 : ganesha.nfsd-2[svc_22] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fef50003fe0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 11 09:20:17 compute-0 radosgw[93354]: ====== starting new request req=0x7f36cd2845d0 =====
Dec 11 09:20:17 compute-0 radosgw[93354]: ====== req done req=0x7f36cd2845d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 11 09:20:17 compute-0 radosgw[93354]: beast: 0x7f36cd2845d0: 192.168.122.100 - anonymous [11/Dec/2025:09:20:17.117 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 11 09:20:17 compute-0 ceph-mgr[74715]: log_channel(cluster) log [DBG] : pgmap v89: 353 pgs: 353 active+clean; 457 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 11 09:20:17 compute-0 radosgw[93354]: ====== starting new request req=0x7f36cd2845d0 =====
Dec 11 09:20:17 compute-0 radosgw[93354]: ====== req done req=0x7f36cd2845d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 11 09:20:17 compute-0 radosgw[93354]: beast: 0x7f36cd2845d0: 192.168.122.102 - anonymous [11/Dec/2025:09:20:17.143 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 11 09:20:17 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-2-0-compute-0-iryjby[94782]: 11/12/2025 09:20:17 : epoch 693a8bfe : compute-0 : ganesha.nfsd-2[svc_22] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fef50003fe0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 11 09:20:17 compute-0 sudo[105747]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 11 09:20:17 compute-0 sudo[105747]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 11 09:20:17 compute-0 sudo[105747]: pam_unix(sudo:session): session closed for user root
Dec 11 09:20:17 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-2-0-compute-0-iryjby[94782]: 11/12/2025 09:20:17 : epoch 693a8bfe : compute-0 : ganesha.nfsd-2[svc_22] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fef50003fe0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 11 09:20:18 compute-0 ceph-mon[74426]: pgmap v89: 353 pgs: 353 active+clean; 457 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 11 09:20:18 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-2-0-compute-0-iryjby[94782]: 11/12/2025 09:20:18 : epoch 693a8bfe : compute-0 : ganesha.nfsd-2[svc_22] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fef50003fe0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 11 09:20:19 compute-0 radosgw[93354]: ====== starting new request req=0x7f36cd2845d0 =====
Dec 11 09:20:19 compute-0 radosgw[93354]: ====== req done req=0x7f36cd2845d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 11 09:20:19 compute-0 radosgw[93354]: beast: 0x7f36cd2845d0: 192.168.122.100 - anonymous [11/Dec/2025:09:20:19.121 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 11 09:20:19 compute-0 ceph-mgr[74715]: log_channel(cluster) log [DBG] : pgmap v90: 353 pgs: 353 active+clean; 457 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 op/s
Dec 11 09:20:19 compute-0 radosgw[93354]: ====== starting new request req=0x7f36cd2845d0 =====
Dec 11 09:20:19 compute-0 radosgw[93354]: ====== req done req=0x7f36cd2845d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 11 09:20:19 compute-0 radosgw[93354]: beast: 0x7f36cd2845d0: 192.168.122.102 - anonymous [11/Dec/2025:09:20:19.148 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 11 09:20:19 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-2-0-compute-0-iryjby[94782]: 11/12/2025 09:20:19 : epoch 693a8bfe : compute-0 : ganesha.nfsd-2[svc_22] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fef50003fe0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 11 09:20:19 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-2-0-compute-0-iryjby[94782]: 11/12/2025 09:20:19 : epoch 693a8bfe : compute-0 : ganesha.nfsd-2[svc_22] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fef50003fe0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 11 09:20:20 compute-0 ceph-mon[74426]: pgmap v90: 353 pgs: 353 active+clean; 457 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 op/s
Dec 11 09:20:20 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-2-0-compute-0-iryjby[94782]: 11/12/2025 09:20:20 : epoch 693a8bfe : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fef58003630 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 11 09:20:21 compute-0 radosgw[93354]: ====== starting new request req=0x7f36cd2845d0 =====
Dec 11 09:20:21 compute-0 radosgw[93354]: ====== req done req=0x7f36cd2845d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 11 09:20:21 compute-0 radosgw[93354]: beast: 0x7f36cd2845d0: 192.168.122.100 - anonymous [11/Dec/2025:09:20:21.123 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 11 09:20:21 compute-0 ceph-mgr[74715]: log_channel(cluster) log [DBG] : pgmap v91: 353 pgs: 353 active+clean; 457 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 11 09:20:21 compute-0 radosgw[93354]: ====== starting new request req=0x7f36cd2845d0 =====
Dec 11 09:20:21 compute-0 radosgw[93354]: ====== req done req=0x7f36cd2845d0 op status=0 http_status=200 latency=0.001000033s ======
Dec 11 09:20:21 compute-0 radosgw[93354]: beast: 0x7f36cd2845d0: 192.168.122.102 - anonymous [11/Dec/2025:09:20:21.152 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Dec 11 09:20:21 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-2-0-compute-0-iryjby[94782]: 11/12/2025 09:20:21 : epoch 693a8bfe : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fef70001ac0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 11 09:20:21 compute-0 ceph-mon[74426]: mon.compute-0@0(leader).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 11 09:20:21 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-2-0-compute-0-iryjby[94782]: 11/12/2025 09:20:21 : epoch 693a8bfe : compute-0 : ganesha.nfsd-2[svc_22] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fef70001ac0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 11 09:20:21 compute-0 ceph-mon[74426]: pgmap v91: 353 pgs: 353 active+clean; 457 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 11 09:20:22 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-2-0-compute-0-iryjby[94782]: 11/12/2025 09:20:22 : epoch 693a8bfe : compute-0 : ganesha.nfsd-2[svc_22] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fef50003fe0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 11 09:20:22 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-haproxy-nfs-cephfs-compute-0-qtoxfz[95274]: [WARNING] 344/092022 (4) : Server backend/nfs.cephfs.1 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 1ms. 2 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Dec 11 09:20:23 compute-0 ceph-mgr[74715]: [balancer INFO root] Optimize plan auto_2025-12-11_09:20:23
Dec 11 09:20:23 compute-0 ceph-mgr[74715]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec 11 09:20:23 compute-0 ceph-mgr[74715]: [balancer INFO root] do_upmap
Dec 11 09:20:23 compute-0 ceph-mgr[74715]: [balancer INFO root] pools ['volumes', 'cephfs.cephfs.data', '.mgr', 'images', 'default.rgw.control', '.nfs', 'default.rgw.meta', 'cephfs.cephfs.meta', 'vms', 'backups', '.rgw.root', 'default.rgw.log']
Dec 11 09:20:23 compute-0 ceph-mgr[74715]: [balancer INFO root] prepared 0/10 upmap changes
Dec 11 09:20:23 compute-0 radosgw[93354]: ====== starting new request req=0x7f36cd2845d0 =====
Dec 11 09:20:23 compute-0 radosgw[93354]: ====== req done req=0x7f36cd2845d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 11 09:20:23 compute-0 radosgw[93354]: beast: 0x7f36cd2845d0: 192.168.122.100 - anonymous [11/Dec/2025:09:20:23.125 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 11 09:20:23 compute-0 ceph-mgr[74715]: log_channel(cluster) log [DBG] : pgmap v92: 353 pgs: 353 active+clean; 457 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 11 09:20:23 compute-0 radosgw[93354]: ====== starting new request req=0x7f36cd2845d0 =====
Dec 11 09:20:23 compute-0 radosgw[93354]: ====== req done req=0x7f36cd2845d0 op status=0 http_status=200 latency=0.001000032s ======
Dec 11 09:20:23 compute-0 radosgw[93354]: beast: 0x7f36cd2845d0: 192.168.122.102 - anonymous [11/Dec/2025:09:20:23.156 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Dec 11 09:20:23 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-2-0-compute-0-iryjby[94782]: 11/12/2025 09:20:23 : epoch 693a8bfe : compute-0 : ganesha.nfsd-2[svc_22] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fef50003fe0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 11 09:20:23 compute-0 ceph-mgr[74715]: [pg_autoscaler INFO root] _maybe_adjust
Dec 11 09:20:23 compute-0 ceph-mgr[74715]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 11 09:20:23 compute-0 ceph-mgr[74715]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec 11 09:20:23 compute-0 ceph-mgr[74715]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 11 09:20:23 compute-0 ceph-mgr[74715]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 11 09:20:23 compute-0 ceph-mgr[74715]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 11 09:20:23 compute-0 ceph-mgr[74715]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 11 09:20:23 compute-0 ceph-mgr[74715]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 11 09:20:23 compute-0 ceph-mgr[74715]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 11 09:20:23 compute-0 ceph-mgr[74715]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 11 09:20:23 compute-0 ceph-mgr[74715]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 11 09:20:23 compute-0 ceph-mgr[74715]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 11 09:20:23 compute-0 ceph-mgr[74715]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Dec 11 09:20:23 compute-0 ceph-mgr[74715]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 11 09:20:23 compute-0 ceph-mgr[74715]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 11 09:20:23 compute-0 ceph-mgr[74715]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 11 09:20:23 compute-0 ceph-mgr[74715]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Dec 11 09:20:23 compute-0 ceph-mgr[74715]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 11 09:20:23 compute-0 ceph-mgr[74715]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Dec 11 09:20:23 compute-0 ceph-mgr[74715]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 11 09:20:23 compute-0 ceph-mgr[74715]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Dec 11 09:20:23 compute-0 ceph-mgr[74715]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 11 09:20:23 compute-0 ceph-mgr[74715]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 11 09:20:23 compute-0 ceph-mgr[74715]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 11 09:20:23 compute-0 ceph-mgr[74715]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 0.0 of space, bias 4.0, pg target 0.0 quantized to 32 (current 32)
Dec 11 09:20:23 compute-0 ceph-mgr[74715]: [volumes INFO mgr_util] scanning for idle connections..
Dec 11 09:20:23 compute-0 ceph-mgr[74715]: [volumes INFO mgr_util] cleaning up connections: []
Dec 11 09:20:23 compute-0 ceph-mgr[74715]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec 11 09:20:23 compute-0 ceph-mgr[74715]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 11 09:20:23 compute-0 ceph-mgr[74715]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 11 09:20:23 compute-0 ceph-mgr[74715]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 11 09:20:23 compute-0 ceph-mgr[74715]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 11 09:20:23 compute-0 ceph-mgr[74715]: [volumes INFO mgr_util] scanning for idle connections..
Dec 11 09:20:23 compute-0 ceph-mgr[74715]: [volumes INFO mgr_util] cleaning up connections: []
Dec 11 09:20:23 compute-0 ceph-mgr[74715]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec 11 09:20:23 compute-0 ceph-mgr[74715]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 11 09:20:23 compute-0 ceph-mgr[74715]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 11 09:20:23 compute-0 ceph-mgr[74715]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 11 09:20:23 compute-0 ceph-mgr[74715]: [volumes INFO mgr_util] scanning for idle connections..
Dec 11 09:20:23 compute-0 ceph-mgr[74715]: [volumes INFO mgr_util] cleaning up connections: []
Dec 11 09:20:23 compute-0 ceph-mgr[74715]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 11 09:20:23 compute-0 ceph-mon[74426]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Dec 11 09:20:23 compute-0 ceph-mon[74426]: log_channel(audit) log [DBG] : from='mgr.14577 192.168.122.100:0/2102670525' entity='mgr.compute-0.wwpcae' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec 11 09:20:23 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-2-0-compute-0-iryjby[94782]: 11/12/2025 09:20:23 : epoch 693a8bfe : compute-0 : ganesha.nfsd-2[svc_22] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fef50003fe0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 11 09:20:23 compute-0 ceph-mon[74426]: pgmap v92: 353 pgs: 353 active+clean; 457 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 11 09:20:23 compute-0 ceph-mon[74426]: from='mgr.14577 192.168.122.100:0/2102670525' entity='mgr.compute-0.wwpcae' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec 11 09:20:24 compute-0 sudo[105801]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-otnfupwmepabzpcaiubelbffuvowidga ; /usr/bin/python3'
Dec 11 09:20:24 compute-0 sudo[105801]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 11 09:20:24 compute-0 python3[105803]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint radosgw-admin quay.io/ceph/ceph:v19 --fsid 31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch ps --daemon-type rgw --format json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 11 09:20:24 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-mgr-compute-0-wwpcae[74711]: ::ffff:192.168.122.100 - - [11/Dec/2025:09:20:24] "GET /metrics HTTP/1.1" 200 48228 "" "Prometheus/2.51.0"
Dec 11 09:20:24 compute-0 ceph-mgr[74715]: [prometheus INFO cherrypy.access.139680037156848] ::ffff:192.168.122.100 - - [11/Dec/2025:09:20:24] "GET /metrics HTTP/1.1" 200 48228 "" "Prometheus/2.51.0"
Dec 11 09:20:24 compute-0 podman[105804]: 2025-12-11 09:20:24.450084095 +0000 UTC m=+0.118856333 container create e45e08e6f25cce4845b55fc3a18e2a53d8985d6cd7318e7e0f1d24890a0da179 (image=quay.io/ceph/ceph:v19, name=crazy_lumiere, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, ceph=True, CEPH_REF=squid, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 11 09:20:24 compute-0 podman[105804]: 2025-12-11 09:20:24.358331633 +0000 UTC m=+0.027103891 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Dec 11 09:20:24 compute-0 systemd[1]: Started libpod-conmon-e45e08e6f25cce4845b55fc3a18e2a53d8985d6cd7318e7e0f1d24890a0da179.scope.
Dec 11 09:20:24 compute-0 systemd[1]: Started libcrun container.
Dec 11 09:20:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f816a418e74cf0e3db2b6f75b7ab32f1abf6b7396768accf1a9e05562148c405/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec 11 09:20:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f816a418e74cf0e3db2b6f75b7ab32f1abf6b7396768accf1a9e05562148c405/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 11 09:20:24 compute-0 podman[105804]: 2025-12-11 09:20:24.528838714 +0000 UTC m=+0.197610972 container init e45e08e6f25cce4845b55fc3a18e2a53d8985d6cd7318e7e0f1d24890a0da179 (image=quay.io/ceph/ceph:v19, name=crazy_lumiere, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 11 09:20:24 compute-0 podman[105804]: 2025-12-11 09:20:24.535746192 +0000 UTC m=+0.204518430 container start e45e08e6f25cce4845b55fc3a18e2a53d8985d6cd7318e7e0f1d24890a0da179 (image=quay.io/ceph/ceph:v19, name=crazy_lumiere, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 11 09:20:24 compute-0 crazy_lumiere[105819]: ERROR: invalid flag --daemon-type
Dec 11 09:20:24 compute-0 systemd[1]: libpod-e45e08e6f25cce4845b55fc3a18e2a53d8985d6cd7318e7e0f1d24890a0da179.scope: Deactivated successfully.
Dec 11 09:20:24 compute-0 podman[105804]: 2025-12-11 09:20:24.743798864 +0000 UTC m=+0.412571112 container attach e45e08e6f25cce4845b55fc3a18e2a53d8985d6cd7318e7e0f1d24890a0da179 (image=quay.io/ceph/ceph:v19, name=crazy_lumiere, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.40.1)
Dec 11 09:20:24 compute-0 podman[105804]: 2025-12-11 09:20:24.744615791 +0000 UTC m=+0.413388029 container died e45e08e6f25cce4845b55fc3a18e2a53d8985d6cd7318e7e0f1d24890a0da179 (image=quay.io/ceph/ceph:v19, name=crazy_lumiere, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1)
Dec 11 09:20:24 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-2-0-compute-0-iryjby[94782]: 11/12/2025 09:20:24 : epoch 693a8bfe : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fef58003630 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 11 09:20:24 compute-0 systemd[1]: var-lib-containers-storage-overlay-f816a418e74cf0e3db2b6f75b7ab32f1abf6b7396768accf1a9e05562148c405-merged.mount: Deactivated successfully.
Dec 11 09:20:24 compute-0 podman[105804]: 2025-12-11 09:20:24.955287395 +0000 UTC m=+0.624059633 container remove e45e08e6f25cce4845b55fc3a18e2a53d8985d6cd7318e7e0f1d24890a0da179 (image=quay.io/ceph/ceph:v19, name=crazy_lumiere, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec 11 09:20:24 compute-0 sudo[105801]: pam_unix(sudo:session): session closed for user root
Dec 11 09:20:24 compute-0 systemd[1]: libpod-conmon-e45e08e6f25cce4845b55fc3a18e2a53d8985d6cd7318e7e0f1d24890a0da179.scope: Deactivated successfully.
Dec 11 09:20:25 compute-0 radosgw[93354]: ====== starting new request req=0x7f36cd2845d0 =====
Dec 11 09:20:25 compute-0 radosgw[93354]: ====== req done req=0x7f36cd2845d0 op status=0 http_status=200 latency=0.001000032s ======
Dec 11 09:20:25 compute-0 radosgw[93354]: beast: 0x7f36cd2845d0: 192.168.122.100 - anonymous [11/Dec/2025:09:20:25.127 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Dec 11 09:20:25 compute-0 ceph-mgr[74715]: log_channel(cluster) log [DBG] : pgmap v93: 353 pgs: 353 active+clean; 457 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 0 op/s
Dec 11 09:20:25 compute-0 radosgw[93354]: ====== starting new request req=0x7f36cd2845d0 =====
Dec 11 09:20:25 compute-0 radosgw[93354]: ====== req done req=0x7f36cd2845d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 11 09:20:25 compute-0 radosgw[93354]: beast: 0x7f36cd2845d0: 192.168.122.102 - anonymous [11/Dec/2025:09:20:25.160 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 11 09:20:25 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-2-0-compute-0-iryjby[94782]: 11/12/2025 09:20:25 : epoch 693a8bfe : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fef70001ac0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 11 09:20:25 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-2-0-compute-0-iryjby[94782]: 11/12/2025 09:20:25 : epoch 693a8bfe : compute-0 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fef60001090 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 11 09:20:26 compute-0 ceph-mon[74426]: pgmap v93: 353 pgs: 353 active+clean; 457 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 0 op/s
Dec 11 09:20:26 compute-0 ceph-mon[74426]: mon.compute-0@0(leader).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 11 09:20:26 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-2-0-compute-0-iryjby[94782]: 11/12/2025 09:20:26 : epoch 693a8bfe : compute-0 : ganesha.nfsd-2[svc_23] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fef50003fe0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 11 09:20:27 compute-0 irqbalance[791]: Cannot change IRQ 26 affinity: Operation not permitted
Dec 11 09:20:27 compute-0 irqbalance[791]: IRQ 26 affinity is now unmanaged
Dec 11 09:20:27 compute-0 radosgw[93354]: ====== starting new request req=0x7f36cd2845d0 =====
Dec 11 09:20:27 compute-0 radosgw[93354]: ====== req done req=0x7f36cd2845d0 op status=0 http_status=200 latency=0.001000032s ======
Dec 11 09:20:27 compute-0 radosgw[93354]: beast: 0x7f36cd2845d0: 192.168.122.100 - anonymous [11/Dec/2025:09:20:27.130 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Dec 11 09:20:27 compute-0 ceph-mgr[74715]: log_channel(cluster) log [DBG] : pgmap v94: 353 pgs: 353 active+clean; 457 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 11 09:20:27 compute-0 radosgw[93354]: ====== starting new request req=0x7f36cd2845d0 =====
Dec 11 09:20:27 compute-0 radosgw[93354]: ====== req done req=0x7f36cd2845d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 11 09:20:27 compute-0 radosgw[93354]: beast: 0x7f36cd2845d0: 192.168.122.102 - anonymous [11/Dec/2025:09:20:27.165 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 11 09:20:27 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-2-0-compute-0-iryjby[94782]: 11/12/2025 09:20:27 : epoch 693a8bfe : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fef58003630 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 11 09:20:27 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-2-0-compute-0-iryjby[94782]: 11/12/2025 09:20:27 : epoch 693a8bfe : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fef54001f50 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 11 09:20:28 compute-0 ceph-mon[74426]: pgmap v94: 353 pgs: 353 active+clean; 457 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 11 09:20:28 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-grafana-compute-0[102058]: logger=infra.usagestats t=2025-12-11T09:20:28.780664236Z level=info msg="Usage stats are ready to report"
Dec 11 09:20:28 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-2-0-compute-0-iryjby[94782]: 11/12/2025 09:20:28 : epoch 693a8bfe : compute-0 : ganesha.nfsd-2[svc_24] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fef60001090 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 11 09:20:29 compute-0 radosgw[93354]: ====== starting new request req=0x7f36cd2845d0 =====
Dec 11 09:20:29 compute-0 radosgw[93354]: ====== req done req=0x7f36cd2845d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 11 09:20:29 compute-0 radosgw[93354]: beast: 0x7f36cd2845d0: 192.168.122.100 - anonymous [11/Dec/2025:09:20:29.132 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 11 09:20:29 compute-0 ceph-mgr[74715]: log_channel(cluster) log [DBG] : pgmap v95: 353 pgs: 353 active+clean; 457 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 682 B/s rd, 85 B/s wr, 0 op/s
Dec 11 09:20:29 compute-0 radosgw[93354]: ====== starting new request req=0x7f36cd2845d0 =====
Dec 11 09:20:29 compute-0 radosgw[93354]: ====== req done req=0x7f36cd2845d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 11 09:20:29 compute-0 radosgw[93354]: beast: 0x7f36cd2845d0: 192.168.122.102 - anonymous [11/Dec/2025:09:20:29.168 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 11 09:20:29 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-2-0-compute-0-iryjby[94782]: 11/12/2025 09:20:29 : epoch 693a8bfe : compute-0 : ganesha.nfsd-2[svc_23] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fef50003fe0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 11 09:20:29 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-2-0-compute-0-iryjby[94782]: 11/12/2025 09:20:29 : epoch 693a8bfe : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fef58003630 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 11 09:20:30 compute-0 ceph-mon[74426]: pgmap v95: 353 pgs: 353 active+clean; 457 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 682 B/s rd, 85 B/s wr, 0 op/s
Dec 11 09:20:30 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-2-0-compute-0-iryjby[94782]: 11/12/2025 09:20:30 : epoch 693a8bfe : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fef54002ad0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 11 09:20:31 compute-0 radosgw[93354]: ====== starting new request req=0x7f36cd2845d0 =====
Dec 11 09:20:31 compute-0 radosgw[93354]: ====== req done req=0x7f36cd2845d0 op status=0 http_status=200 latency=0.001000032s ======
Dec 11 09:20:31 compute-0 radosgw[93354]: beast: 0x7f36cd2845d0: 192.168.122.100 - anonymous [11/Dec/2025:09:20:31.134 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Dec 11 09:20:31 compute-0 ceph-mgr[74715]: log_channel(cluster) log [DBG] : pgmap v96: 353 pgs: 353 active+clean; 457 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 597 B/s rd, 85 B/s wr, 0 op/s
Dec 11 09:20:31 compute-0 radosgw[93354]: ====== starting new request req=0x7f36cd2845d0 =====
Dec 11 09:20:31 compute-0 radosgw[93354]: ====== req done req=0x7f36cd2845d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 11 09:20:31 compute-0 radosgw[93354]: beast: 0x7f36cd2845d0: 192.168.122.102 - anonymous [11/Dec/2025:09:20:31.172 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 11 09:20:31 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-2-0-compute-0-iryjby[94782]: 11/12/2025 09:20:31 : epoch 693a8bfe : compute-0 : ganesha.nfsd-2[svc_24] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fef60001090 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 11 09:20:31 compute-0 ceph-mon[74426]: mon.compute-0@0(leader).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 11 09:20:31 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-2-0-compute-0-iryjby[94782]: 11/12/2025 09:20:31 : epoch 693a8bfe : compute-0 : ganesha.nfsd-2[svc_23] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fef50003fe0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 11 09:20:31 compute-0 ceph-mon[74426]: pgmap v96: 353 pgs: 353 active+clean; 457 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 597 B/s rd, 85 B/s wr, 0 op/s
Dec 11 09:20:32 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-2-0-compute-0-iryjby[94782]: 11/12/2025 09:20:32 : epoch 693a8bfe : compute-0 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Dec 11 09:20:32 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-2-0-compute-0-iryjby[94782]: 11/12/2025 09:20:32 : epoch 693a8bfe : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fef58003630 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 11 09:20:33 compute-0 radosgw[93354]: ====== starting new request req=0x7f36cd2845d0 =====
Dec 11 09:20:33 compute-0 radosgw[93354]: ====== req done req=0x7f36cd2845d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 11 09:20:33 compute-0 radosgw[93354]: beast: 0x7f36cd2845d0: 192.168.122.100 - anonymous [11/Dec/2025:09:20:33.137 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 11 09:20:33 compute-0 ceph-mgr[74715]: log_channel(cluster) log [DBG] : pgmap v97: 353 pgs: 353 active+clean; 457 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 597 B/s rd, 85 B/s wr, 0 op/s
Dec 11 09:20:33 compute-0 radosgw[93354]: ====== starting new request req=0x7f36cd2845d0 =====
Dec 11 09:20:33 compute-0 radosgw[93354]: ====== req done req=0x7f36cd2845d0 op status=0 http_status=200 latency=0.001000032s ======
Dec 11 09:20:33 compute-0 radosgw[93354]: beast: 0x7f36cd2845d0: 192.168.122.102 - anonymous [11/Dec/2025:09:20:33.175 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Dec 11 09:20:33 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-2-0-compute-0-iryjby[94782]: 11/12/2025 09:20:33 : epoch 693a8bfe : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fef54002ad0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 11 09:20:33 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-2-0-compute-0-iryjby[94782]: 11/12/2025 09:20:33 : epoch 693a8bfe : compute-0 : ganesha.nfsd-2[svc_24] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fef60002380 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 11 09:20:34 compute-0 ceph-mon[74426]: pgmap v97: 353 pgs: 353 active+clean; 457 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 597 B/s rd, 85 B/s wr, 0 op/s
Dec 11 09:20:34 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-mgr-compute-0-wwpcae[74711]: ::ffff:192.168.122.100 - - [11/Dec/2025:09:20:34] "GET /metrics HTTP/1.1" 200 48228 "" "Prometheus/2.51.0"
Dec 11 09:20:34 compute-0 ceph-mgr[74715]: [prometheus INFO cherrypy.access.139680037156848] ::ffff:192.168.122.100 - - [11/Dec/2025:09:20:34] "GET /metrics HTTP/1.1" 200 48228 "" "Prometheus/2.51.0"
Dec 11 09:20:34 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-2-0-compute-0-iryjby[94782]: 11/12/2025 09:20:34 : epoch 693a8bfe : compute-0 : ganesha.nfsd-2[svc_23] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fef50003fe0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 11 09:20:35 compute-0 sudo[105888]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xbgsaxibkzdlobbvmhorgacffyrsqjeb ; /usr/bin/python3'
Dec 11 09:20:35 compute-0 sudo[105888]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 11 09:20:35 compute-0 radosgw[93354]: ====== starting new request req=0x7f36cd2845d0 =====
Dec 11 09:20:35 compute-0 radosgw[93354]: ====== req done req=0x7f36cd2845d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 11 09:20:35 compute-0 radosgw[93354]: beast: 0x7f36cd2845d0: 192.168.122.100 - anonymous [11/Dec/2025:09:20:35.139 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 11 09:20:35 compute-0 ceph-mgr[74715]: log_channel(cluster) log [DBG] : pgmap v98: 353 pgs: 353 active+clean; 457 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 597 B/s wr, 1 op/s
Dec 11 09:20:35 compute-0 radosgw[93354]: ====== starting new request req=0x7f36cd2845d0 =====
Dec 11 09:20:35 compute-0 radosgw[93354]: ====== req done req=0x7f36cd2845d0 op status=0 http_status=200 latency=0.001000032s ======
Dec 11 09:20:35 compute-0 radosgw[93354]: beast: 0x7f36cd2845d0: 192.168.122.102 - anonymous [11/Dec/2025:09:20:35.179 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Dec 11 09:20:35 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-2-0-compute-0-iryjby[94782]: 11/12/2025 09:20:35 : epoch 693a8bfe : compute-0 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Dec 11 09:20:35 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-2-0-compute-0-iryjby[94782]: 11/12/2025 09:20:35 : epoch 693a8bfe : compute-0 : ganesha.nfsd-2[reaper] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Dec 11 09:20:35 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-2-0-compute-0-iryjby[94782]: 11/12/2025 09:20:35 : epoch 693a8bfe : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fef58003630 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 11 09:20:35 compute-0 python3[105890]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint radosgw-admin quay.io/ceph/ceph:v19 --fsid 31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch ps --daemon-type rgw --format json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 11 09:20:35 compute-0 podman[105891]: 2025-12-11 09:20:35.332300663 +0000 UTC m=+0.022857676 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Dec 11 09:20:35 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-2-0-compute-0-iryjby[94782]: 11/12/2025 09:20:35 : epoch 693a8bfe : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fef54002ad0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 11 09:20:35 compute-0 podman[105891]: 2025-12-11 09:20:35.632161687 +0000 UTC m=+0.322718680 container create f4343916c3d3b0de2b8d7100454dd60833bf79e99888115b21e42e0eb72faa19 (image=quay.io/ceph/ceph:v19, name=elegant_elgamal, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Dec 11 09:20:35 compute-0 systemd[1]: Started libpod-conmon-f4343916c3d3b0de2b8d7100454dd60833bf79e99888115b21e42e0eb72faa19.scope.
Dec 11 09:20:35 compute-0 systemd[1]: Started libcrun container.
Dec 11 09:20:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/70197c87b09a96662d474e1015edcea6a4407e577680c3270acda9e4dfffc7d6/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec 11 09:20:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/70197c87b09a96662d474e1015edcea6a4407e577680c3270acda9e4dfffc7d6/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 11 09:20:35 compute-0 podman[105891]: 2025-12-11 09:20:35.769520906 +0000 UTC m=+0.460077919 container init f4343916c3d3b0de2b8d7100454dd60833bf79e99888115b21e42e0eb72faa19 (image=quay.io/ceph/ceph:v19, name=elegant_elgamal, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1)
Dec 11 09:20:35 compute-0 podman[105891]: 2025-12-11 09:20:35.778578743 +0000 UTC m=+0.469135736 container start f4343916c3d3b0de2b8d7100454dd60833bf79e99888115b21e42e0eb72faa19 (image=quay.io/ceph/ceph:v19, name=elegant_elgamal, CEPH_REF=squid, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Dec 11 09:20:35 compute-0 podman[105891]: 2025-12-11 09:20:35.782685074 +0000 UTC m=+0.473242067 container attach f4343916c3d3b0de2b8d7100454dd60833bf79e99888115b21e42e0eb72faa19 (image=quay.io/ceph/ceph:v19, name=elegant_elgamal, OSD_FLAVOR=default, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 11 09:20:35 compute-0 ceph-mon[74426]: pgmap v98: 353 pgs: 353 active+clean; 457 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 597 B/s wr, 1 op/s
Dec 11 09:20:35 compute-0 elegant_elgamal[105906]: ERROR: invalid flag --daemon-type
Dec 11 09:20:35 compute-0 systemd[1]: libpod-f4343916c3d3b0de2b8d7100454dd60833bf79e99888115b21e42e0eb72faa19.scope: Deactivated successfully.
Dec 11 09:20:35 compute-0 podman[105891]: 2025-12-11 09:20:35.841380767 +0000 UTC m=+0.531937780 container died f4343916c3d3b0de2b8d7100454dd60833bf79e99888115b21e42e0eb72faa19 (image=quay.io/ceph/ceph:v19, name=elegant_elgamal, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 11 09:20:35 compute-0 systemd[1]: var-lib-containers-storage-overlay-70197c87b09a96662d474e1015edcea6a4407e577680c3270acda9e4dfffc7d6-merged.mount: Deactivated successfully.
Dec 11 09:20:35 compute-0 podman[105891]: 2025-12-11 09:20:35.897189407 +0000 UTC m=+0.587746400 container remove f4343916c3d3b0de2b8d7100454dd60833bf79e99888115b21e42e0eb72faa19 (image=quay.io/ceph/ceph:v19, name=elegant_elgamal, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=squid, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2)
Dec 11 09:20:35 compute-0 systemd[1]: libpod-conmon-f4343916c3d3b0de2b8d7100454dd60833bf79e99888115b21e42e0eb72faa19.scope: Deactivated successfully.
Dec 11 09:20:35 compute-0 sudo[105888]: pam_unix(sudo:session): session closed for user root
Dec 11 09:20:36 compute-0 ceph-mon[74426]: mon.compute-0@0(leader).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 11 09:20:36 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-2-0-compute-0-iryjby[94782]: 11/12/2025 09:20:36 : epoch 693a8bfe : compute-0 : ganesha.nfsd-2[svc_24] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fef60002380 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 11 09:20:37 compute-0 ceph-mgr[74715]: log_channel(cluster) log [DBG] : pgmap v99: 353 pgs: 353 active+clean; 457 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.1 KiB/s rd, 597 B/s wr, 1 op/s
Dec 11 09:20:37 compute-0 radosgw[93354]: ====== starting new request req=0x7f36cd2845d0 =====
Dec 11 09:20:37 compute-0 radosgw[93354]: ====== req done req=0x7f36cd2845d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 11 09:20:37 compute-0 radosgw[93354]: beast: 0x7f36cd2845d0: 192.168.122.100 - anonymous [11/Dec/2025:09:20:37.141 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 11 09:20:37 compute-0 radosgw[93354]: ====== starting new request req=0x7f36cd2845d0 =====
Dec 11 09:20:37 compute-0 radosgw[93354]: ====== req done req=0x7f36cd2845d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 11 09:20:37 compute-0 radosgw[93354]: beast: 0x7f36cd2845d0: 192.168.122.102 - anonymous [11/Dec/2025:09:20:37.183 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 11 09:20:37 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-2-0-compute-0-iryjby[94782]: 11/12/2025 09:20:37 : epoch 693a8bfe : compute-0 : ganesha.nfsd-2[svc_23] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fef50003fe0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 11 09:20:37 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-2-0-compute-0-iryjby[94782]: 11/12/2025 09:20:37 : epoch 693a8bfe : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fef58003630 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 11 09:20:37 compute-0 sudo[105940]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 11 09:20:37 compute-0 sudo[105940]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 11 09:20:37 compute-0 sudo[105940]: pam_unix(sudo:session): session closed for user root
Dec 11 09:20:37 compute-0 ceph-mon[74426]: pgmap v99: 353 pgs: 353 active+clean; 457 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.1 KiB/s rd, 597 B/s wr, 1 op/s
Dec 11 09:20:38 compute-0 ceph-mon[74426]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Dec 11 09:20:38 compute-0 ceph-mon[74426]: log_channel(audit) log [DBG] : from='mgr.14577 192.168.122.100:0/2102670525' entity='mgr.compute-0.wwpcae' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec 11 09:20:38 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-2-0-compute-0-iryjby[94782]: 11/12/2025 09:20:38 : epoch 693a8bfe : compute-0 : ganesha.nfsd-2[reaper] nfs_lift_grace_locked :STATE :EVENT :NFS Server Now NOT IN GRACE
Dec 11 09:20:38 compute-0 ceph-mon[74426]: from='mgr.14577 192.168.122.100:0/2102670525' entity='mgr.compute-0.wwpcae' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec 11 09:20:38 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-2-0-compute-0-iryjby[94782]: 11/12/2025 09:20:38 : epoch 693a8bfe : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fef54003bf0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 11 09:20:39 compute-0 ceph-mgr[74715]: log_channel(cluster) log [DBG] : pgmap v100: 353 pgs: 353 active+clean; 457 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.5 KiB/s rd, 1023 B/s wr, 3 op/s
Dec 11 09:20:39 compute-0 radosgw[93354]: ====== starting new request req=0x7f36cd2845d0 =====
Dec 11 09:20:39 compute-0 radosgw[93354]: ====== req done req=0x7f36cd2845d0 op status=0 http_status=200 latency=0.001000032s ======
Dec 11 09:20:39 compute-0 radosgw[93354]: beast: 0x7f36cd2845d0: 192.168.122.100 - anonymous [11/Dec/2025:09:20:39.143 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Dec 11 09:20:39 compute-0 radosgw[93354]: ====== starting new request req=0x7f36cd2845d0 =====
Dec 11 09:20:39 compute-0 radosgw[93354]: ====== req done req=0x7f36cd2845d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 11 09:20:39 compute-0 radosgw[93354]: beast: 0x7f36cd2845d0: 192.168.122.102 - anonymous [11/Dec/2025:09:20:39.186 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 11 09:20:39 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-2-0-compute-0-iryjby[94782]: 11/12/2025 09:20:39 : epoch 693a8bfe : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fef58003630 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 11 09:20:39 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-2-0-compute-0-iryjby[94782]: 11/12/2025 09:20:39 : epoch 693a8bfe : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fef60002380 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 11 09:20:40 compute-0 ceph-mon[74426]: pgmap v100: 353 pgs: 353 active+clean; 457 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.5 KiB/s rd, 1023 B/s wr, 3 op/s
Dec 11 09:20:40 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-2-0-compute-0-iryjby[94782]: 11/12/2025 09:20:40 : epoch 693a8bfe : compute-0 : ganesha.nfsd-2[svc_23] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fef54003bf0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 11 09:20:41 compute-0 ceph-mgr[74715]: log_channel(cluster) log [DBG] : pgmap v101: 353 pgs: 353 active+clean; 457 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.1 KiB/s rd, 938 B/s wr, 3 op/s
Dec 11 09:20:41 compute-0 radosgw[93354]: ====== starting new request req=0x7f36cd2845d0 =====
Dec 11 09:20:41 compute-0 radosgw[93354]: ====== req done req=0x7f36cd2845d0 op status=0 http_status=200 latency=0.001000032s ======
Dec 11 09:20:41 compute-0 radosgw[93354]: beast: 0x7f36cd2845d0: 192.168.122.100 - anonymous [11/Dec/2025:09:20:41.145 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Dec 11 09:20:41 compute-0 radosgw[93354]: ====== starting new request req=0x7f36cd2845d0 =====
Dec 11 09:20:41 compute-0 radosgw[93354]: ====== req done req=0x7f36cd2845d0 op status=0 http_status=200 latency=0.001000033s ======
Dec 11 09:20:41 compute-0 radosgw[93354]: beast: 0x7f36cd2845d0: 192.168.122.102 - anonymous [11/Dec/2025:09:20:41.188 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Dec 11 09:20:41 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-2-0-compute-0-iryjby[94782]: 11/12/2025 09:20:41 : epoch 693a8bfe : compute-0 : ganesha.nfsd-2[svc_24] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fef5000bd00 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 11 09:20:41 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-2-0-compute-0-iryjby[94782]: 11/12/2025 09:20:41 : epoch 693a8bfe : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fef58003630 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 11 09:20:41 compute-0 ceph-mon[74426]: mon.compute-0@0(leader).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 11 09:20:41 compute-0 ceph-mon[74426]: pgmap v101: 353 pgs: 353 active+clean; 457 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.1 KiB/s rd, 938 B/s wr, 3 op/s
Dec 11 09:20:42 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-2-0-compute-0-iryjby[94782]: 11/12/2025 09:20:42 : epoch 693a8bfe : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fef600043b0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 11 09:20:43 compute-0 ceph-mgr[74715]: log_channel(cluster) log [DBG] : pgmap v102: 353 pgs: 353 active+clean; 457 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.1 KiB/s rd, 938 B/s wr, 3 op/s
Dec 11 09:20:43 compute-0 radosgw[93354]: ====== starting new request req=0x7f36cd2845d0 =====
Dec 11 09:20:43 compute-0 radosgw[93354]: ====== req done req=0x7f36cd2845d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 11 09:20:43 compute-0 radosgw[93354]: beast: 0x7f36cd2845d0: 192.168.122.100 - anonymous [11/Dec/2025:09:20:43.148 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 11 09:20:43 compute-0 radosgw[93354]: ====== starting new request req=0x7f36cd2845d0 =====
Dec 11 09:20:43 compute-0 radosgw[93354]: ====== req done req=0x7f36cd2845d0 op status=0 http_status=200 latency=0.001000032s ======
Dec 11 09:20:43 compute-0 radosgw[93354]: beast: 0x7f36cd2845d0: 192.168.122.102 - anonymous [11/Dec/2025:09:20:43.192 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Dec 11 09:20:43 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-2-0-compute-0-iryjby[94782]: 11/12/2025 09:20:43 : epoch 693a8bfe : compute-0 : ganesha.nfsd-2[svc_23] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fef54003bf0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 11 09:20:43 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-2-0-compute-0-iryjby[94782]: 11/12/2025 09:20:43 : epoch 693a8bfe : compute-0 : ganesha.nfsd-2[svc_23] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fef5000bd00 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 11 09:20:44 compute-0 ceph-mon[74426]: pgmap v102: 353 pgs: 353 active+clean; 457 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.1 KiB/s rd, 938 B/s wr, 3 op/s
Dec 11 09:20:44 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-mgr-compute-0-wwpcae[74711]: ::ffff:192.168.122.100 - - [11/Dec/2025:09:20:44] "GET /metrics HTTP/1.1" 200 48238 "" "Prometheus/2.51.0"
Dec 11 09:20:44 compute-0 ceph-mgr[74715]: [prometheus INFO cherrypy.access.139680037156848] ::ffff:192.168.122.100 - - [11/Dec/2025:09:20:44] "GET /metrics HTTP/1.1" 200 48238 "" "Prometheus/2.51.0"
Dec 11 09:20:44 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-2-0-compute-0-iryjby[94782]: 11/12/2025 09:20:44 : epoch 693a8bfe : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fef58004340 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 11 09:20:44 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-haproxy-nfs-cephfs-compute-0-qtoxfz[95274]: [WARNING] 344/092044 (4) : Server backend/nfs.cephfs.1 is UP, reason: Layer4 check passed, check duration: 0ms. 3 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Dec 11 09:20:45 compute-0 ceph-mgr[74715]: log_channel(cluster) log [DBG] : pgmap v103: 353 pgs: 353 active+clean; 457 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.2 KiB/s rd, 938 B/s wr, 3 op/s
Dec 11 09:20:45 compute-0 radosgw[93354]: ====== starting new request req=0x7f36cd2845d0 =====
Dec 11 09:20:45 compute-0 radosgw[93354]: ====== req done req=0x7f36cd2845d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 11 09:20:45 compute-0 radosgw[93354]: beast: 0x7f36cd2845d0: 192.168.122.100 - anonymous [11/Dec/2025:09:20:45.149 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 11 09:20:45 compute-0 radosgw[93354]: ====== starting new request req=0x7f36cd2845d0 =====
Dec 11 09:20:45 compute-0 radosgw[93354]: ====== req done req=0x7f36cd2845d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 11 09:20:45 compute-0 radosgw[93354]: beast: 0x7f36cd2845d0: 192.168.122.102 - anonymous [11/Dec/2025:09:20:45.196 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 11 09:20:45 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-2-0-compute-0-iryjby[94782]: 11/12/2025 09:20:45 : epoch 693a8bfe : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fef54003bf0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 11 09:20:45 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-2-0-compute-0-iryjby[94782]: 11/12/2025 09:20:45 : epoch 693a8bfe : compute-0 : ganesha.nfsd-2[svc_23] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fef54003bf0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 11 09:20:46 compute-0 sudo[105996]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qzofnwghltlglwsroxfqzwxvcxmcrjfy ; /usr/bin/python3'
Dec 11 09:20:46 compute-0 sudo[105996]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 11 09:20:46 compute-0 python3[105998]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint radosgw-admin quay.io/ceph/ceph:v19 --fsid 31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch ps --daemon-type rgw --format json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 11 09:20:46 compute-0 podman[105999]: 2025-12-11 09:20:46.23739143 +0000 UTC m=+0.051539616 container create f8367cd71899e30a1a21cfa10f47cca4f812ce23cb7c7edc7acab23686b6d477 (image=quay.io/ceph/ceph:v19, name=loving_dhawan, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Dec 11 09:20:46 compute-0 systemd[1]: Started libpod-conmon-f8367cd71899e30a1a21cfa10f47cca4f812ce23cb7c7edc7acab23686b6d477.scope.
Dec 11 09:20:46 compute-0 systemd[1]: Started libcrun container.
Dec 11 09:20:46 compute-0 podman[105999]: 2025-12-11 09:20:46.215226997 +0000 UTC m=+0.029375223 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Dec 11 09:20:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6dc5be78d5ec340384d1864edb4312c8b9892b7e7971913409dca92df78963d1/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 11 09:20:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6dc5be78d5ec340384d1864edb4312c8b9892b7e7971913409dca92df78963d1/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec 11 09:20:46 compute-0 podman[105999]: 2025-12-11 09:20:46.329148553 +0000 UTC m=+0.143296779 container init f8367cd71899e30a1a21cfa10f47cca4f812ce23cb7c7edc7acab23686b6d477 (image=quay.io/ceph/ceph:v19, name=loving_dhawan, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Dec 11 09:20:46 compute-0 podman[105999]: 2025-12-11 09:20:46.335590326 +0000 UTC m=+0.149738532 container start f8367cd71899e30a1a21cfa10f47cca4f812ce23cb7c7edc7acab23686b6d477 (image=quay.io/ceph/ceph:v19, name=loving_dhawan, org.label-schema.build-date=20250325, CEPH_REF=squid, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec 11 09:20:46 compute-0 podman[105999]: 2025-12-11 09:20:46.341578136 +0000 UTC m=+0.155726342 container attach f8367cd71899e30a1a21cfa10f47cca4f812ce23cb7c7edc7acab23686b6d477 (image=quay.io/ceph/ceph:v19, name=loving_dhawan, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, CEPH_REF=squid, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 11 09:20:46 compute-0 loving_dhawan[106014]: ERROR: invalid flag --daemon-type
Dec 11 09:20:46 compute-0 systemd[1]: libpod-f8367cd71899e30a1a21cfa10f47cca4f812ce23cb7c7edc7acab23686b6d477.scope: Deactivated successfully.
Dec 11 09:20:46 compute-0 podman[105999]: 2025-12-11 09:20:46.404264116 +0000 UTC m=+0.218412312 container died f8367cd71899e30a1a21cfa10f47cca4f812ce23cb7c7edc7acab23686b6d477 (image=quay.io/ceph/ceph:v19, name=loving_dhawan, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, ceph=True, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Dec 11 09:20:46 compute-0 ceph-mon[74426]: pgmap v103: 353 pgs: 353 active+clean; 457 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.2 KiB/s rd, 938 B/s wr, 3 op/s
Dec 11 09:20:46 compute-0 systemd[1]: var-lib-containers-storage-overlay-6dc5be78d5ec340384d1864edb4312c8b9892b7e7971913409dca92df78963d1-merged.mount: Deactivated successfully.
Dec 11 09:20:46 compute-0 podman[105999]: 2025-12-11 09:20:46.4614505 +0000 UTC m=+0.275598696 container remove f8367cd71899e30a1a21cfa10f47cca4f812ce23cb7c7edc7acab23686b6d477 (image=quay.io/ceph/ceph:v19, name=loving_dhawan, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 11 09:20:46 compute-0 systemd[1]: libpod-conmon-f8367cd71899e30a1a21cfa10f47cca4f812ce23cb7c7edc7acab23686b6d477.scope: Deactivated successfully.
Dec 11 09:20:46 compute-0 sudo[105996]: pam_unix(sudo:session): session closed for user root
Dec 11 09:20:46 compute-0 ceph-mon[74426]: mon.compute-0@0(leader).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 11 09:20:46 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-2-0-compute-0-iryjby[94782]: 11/12/2025 09:20:46 : epoch 693a8bfe : compute-0 : ganesha.nfsd-2[svc_24] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fef5000bd00 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 11 09:20:47 compute-0 ceph-mgr[74715]: log_channel(cluster) log [DBG] : pgmap v104: 353 pgs: 353 active+clean; 457 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 426 B/s wr, 2 op/s
Dec 11 09:20:47 compute-0 radosgw[93354]: ====== starting new request req=0x7f36cd2845d0 =====
Dec 11 09:20:47 compute-0 radosgw[93354]: ====== req done req=0x7f36cd2845d0 op status=0 http_status=200 latency=0.001000033s ======
Dec 11 09:20:47 compute-0 radosgw[93354]: beast: 0x7f36cd2845d0: 192.168.122.100 - anonymous [11/Dec/2025:09:20:47.151 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Dec 11 09:20:47 compute-0 radosgw[93354]: ====== starting new request req=0x7f36cd2845d0 =====
Dec 11 09:20:47 compute-0 radosgw[93354]: ====== req done req=0x7f36cd2845d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 11 09:20:47 compute-0 radosgw[93354]: beast: 0x7f36cd2845d0: 192.168.122.102 - anonymous [11/Dec/2025:09:20:47.200 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 11 09:20:47 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-2-0-compute-0-iryjby[94782]: 11/12/2025 09:20:47 : epoch 693a8bfe : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fef58004340 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 11 09:20:47 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-2-0-compute-0-iryjby[94782]: 11/12/2025 09:20:47 : epoch 693a8bfe : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fef600043b0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 11 09:20:48 compute-0 ceph-mon[74426]: pgmap v104: 353 pgs: 353 active+clean; 457 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 426 B/s wr, 2 op/s
Dec 11 09:20:48 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-2-0-compute-0-iryjby[94782]: 11/12/2025 09:20:48 : epoch 693a8bfe : compute-0 : ganesha.nfsd-2[svc_23] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fef54003bf0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 11 09:20:49 compute-0 ceph-mgr[74715]: log_channel(cluster) log [DBG] : pgmap v105: 353 pgs: 353 active+clean; 457 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.7 KiB/s rd, 426 B/s wr, 2 op/s
Dec 11 09:20:49 compute-0 radosgw[93354]: ====== starting new request req=0x7f36cd2845d0 =====
Dec 11 09:20:49 compute-0 radosgw[93354]: ====== req done req=0x7f36cd2845d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 11 09:20:49 compute-0 radosgw[93354]: beast: 0x7f36cd2845d0: 192.168.122.100 - anonymous [11/Dec/2025:09:20:49.153 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 11 09:20:49 compute-0 radosgw[93354]: ====== starting new request req=0x7f36cd2845d0 =====
Dec 11 09:20:49 compute-0 radosgw[93354]: ====== req done req=0x7f36cd2845d0 op status=0 http_status=200 latency=0.001000032s ======
Dec 11 09:20:49 compute-0 radosgw[93354]: beast: 0x7f36cd2845d0: 192.168.122.102 - anonymous [11/Dec/2025:09:20:49.204 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Dec 11 09:20:49 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-2-0-compute-0-iryjby[94782]: 11/12/2025 09:20:49 : epoch 693a8bfe : compute-0 : ganesha.nfsd-2[svc_24] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fef5000bd00 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 11 09:20:49 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-2-0-compute-0-iryjby[94782]: 11/12/2025 09:20:49 : epoch 693a8bfe : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fef58004340 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 11 09:20:49 compute-0 ceph-mon[74426]: pgmap v105: 353 pgs: 353 active+clean; 457 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.7 KiB/s rd, 426 B/s wr, 2 op/s
Dec 11 09:20:50 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-2-0-compute-0-iryjby[94782]: 11/12/2025 09:20:50 : epoch 693a8bfe : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fef600043b0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 11 09:20:51 compute-0 ceph-mgr[74715]: log_channel(cluster) log [DBG] : pgmap v106: 353 pgs: 353 active+clean; 457 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 B/s wr, 0 op/s
Dec 11 09:20:51 compute-0 radosgw[93354]: ====== starting new request req=0x7f36cd2845d0 =====
Dec 11 09:20:51 compute-0 radosgw[93354]: ====== req done req=0x7f36cd2845d0 op status=0 http_status=200 latency=0.001000032s ======
Dec 11 09:20:51 compute-0 radosgw[93354]: beast: 0x7f36cd2845d0: 192.168.122.100 - anonymous [11/Dec/2025:09:20:51.156 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Dec 11 09:20:51 compute-0 radosgw[93354]: ====== starting new request req=0x7f36cd2845d0 =====
Dec 11 09:20:51 compute-0 radosgw[93354]: ====== req done req=0x7f36cd2845d0 op status=0 http_status=200 latency=0.001000032s ======
Dec 11 09:20:51 compute-0 radosgw[93354]: beast: 0x7f36cd2845d0: 192.168.122.102 - anonymous [11/Dec/2025:09:20:51.209 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Dec 11 09:20:51 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-2-0-compute-0-iryjby[94782]: 11/12/2025 09:20:51 : epoch 693a8bfe : compute-0 : ganesha.nfsd-2[svc_24] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fef600043b0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 11 09:20:51 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-2-0-compute-0-iryjby[94782]: 11/12/2025 09:20:51 : epoch 693a8bfe : compute-0 : ganesha.nfsd-2[svc_24] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fef5000bd00 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 11 09:20:51 compute-0 ceph-mon[74426]: mon.compute-0@0(leader).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 11 09:20:52 compute-0 ceph-mon[74426]: pgmap v106: 353 pgs: 353 active+clean; 457 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 B/s wr, 0 op/s
Dec 11 09:20:52 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-2-0-compute-0-iryjby[94782]: 11/12/2025 09:20:52 : epoch 693a8bfe : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fef5000bd00 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 11 09:20:53 compute-0 ceph-mgr[74715]: log_channel(cluster) log [DBG] : pgmap v107: 353 pgs: 353 active+clean; 457 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 B/s wr, 0 op/s
Dec 11 09:20:53 compute-0 radosgw[93354]: ====== starting new request req=0x7f36cd2845d0 =====
Dec 11 09:20:53 compute-0 radosgw[93354]: ====== req done req=0x7f36cd2845d0 op status=0 http_status=200 latency=0.001000032s ======
Dec 11 09:20:53 compute-0 radosgw[93354]: beast: 0x7f36cd2845d0: 192.168.122.100 - anonymous [11/Dec/2025:09:20:53.158 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Dec 11 09:20:53 compute-0 radosgw[93354]: ====== starting new request req=0x7f36cd2845d0 =====
Dec 11 09:20:53 compute-0 radosgw[93354]: ====== req done req=0x7f36cd2845d0 op status=0 http_status=200 latency=0.001000032s ======
Dec 11 09:20:53 compute-0 radosgw[93354]: beast: 0x7f36cd2845d0: 192.168.122.102 - anonymous [11/Dec/2025:09:20:53.212 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Dec 11 09:20:53 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-2-0-compute-0-iryjby[94782]: 11/12/2025 09:20:53 : epoch 693a8bfe : compute-0 : ganesha.nfsd-2[svc_23] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fef5000bd00 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 11 09:20:53 compute-0 ceph-mon[74426]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Dec 11 09:20:53 compute-0 ceph-mon[74426]: log_channel(audit) log [DBG] : from='mgr.14577 192.168.122.100:0/2102670525' entity='mgr.compute-0.wwpcae' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec 11 09:20:53 compute-0 ceph-mgr[74715]: [volumes INFO mgr_util] scanning for idle connections..
Dec 11 09:20:53 compute-0 ceph-mgr[74715]: [volumes INFO mgr_util] cleaning up connections: []
Dec 11 09:20:53 compute-0 ceph-mgr[74715]: [volumes INFO mgr_util] scanning for idle connections..
Dec 11 09:20:53 compute-0 ceph-mgr[74715]: [volumes INFO mgr_util] cleaning up connections: []
Dec 11 09:20:53 compute-0 ceph-mgr[74715]: [volumes INFO mgr_util] scanning for idle connections..
Dec 11 09:20:53 compute-0 ceph-mgr[74715]: [volumes INFO mgr_util] cleaning up connections: []
Dec 11 09:20:53 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-2-0-compute-0-iryjby[94782]: 11/12/2025 09:20:53 : epoch 693a8bfe : compute-0 : ganesha.nfsd-2[svc_23] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fef5000bd00 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 11 09:20:54 compute-0 ceph-mon[74426]: pgmap v107: 353 pgs: 353 active+clean; 457 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 B/s wr, 0 op/s
Dec 11 09:20:54 compute-0 ceph-mon[74426]: from='mgr.14577 192.168.122.100:0/2102670525' entity='mgr.compute-0.wwpcae' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec 11 09:20:54 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-mgr-compute-0-wwpcae[74711]: ::ffff:192.168.122.100 - - [11/Dec/2025:09:20:54] "GET /metrics HTTP/1.1" 200 48238 "" "Prometheus/2.51.0"
Dec 11 09:20:54 compute-0 ceph-mgr[74715]: [prometheus INFO cherrypy.access.139680037156848] ::ffff:192.168.122.100 - - [11/Dec/2025:09:20:54] "GET /metrics HTTP/1.1" 200 48238 "" "Prometheus/2.51.0"
Dec 11 09:20:54 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-2-0-compute-0-iryjby[94782]: 11/12/2025 09:20:54 : epoch 693a8bfe : compute-0 : ganesha.nfsd-2[svc_23] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fef5000bd00 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 11 09:20:55 compute-0 ceph-mgr[74715]: log_channel(cluster) log [DBG] : pgmap v108: 353 pgs: 353 active+clean; 457 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 B/s wr, 0 op/s
Dec 11 09:20:55 compute-0 radosgw[93354]: ====== starting new request req=0x7f36cd2845d0 =====
Dec 11 09:20:55 compute-0 radosgw[93354]: ====== req done req=0x7f36cd2845d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 11 09:20:55 compute-0 radosgw[93354]: beast: 0x7f36cd2845d0: 192.168.122.100 - anonymous [11/Dec/2025:09:20:55.160 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 11 09:20:55 compute-0 radosgw[93354]: ====== starting new request req=0x7f36cd2845d0 =====
Dec 11 09:20:55 compute-0 radosgw[93354]: ====== req done req=0x7f36cd2845d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 11 09:20:55 compute-0 radosgw[93354]: beast: 0x7f36cd2845d0: 192.168.122.102 - anonymous [11/Dec/2025:09:20:55.215 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 11 09:20:55 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-2-0-compute-0-iryjby[94782]: 11/12/2025 09:20:55 : epoch 693a8bfe : compute-0 : ganesha.nfsd-2[svc_23] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fef5000bd00 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 11 09:20:55 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-2-0-compute-0-iryjby[94782]: 11/12/2025 09:20:55 : epoch 693a8bfe : compute-0 : ganesha.nfsd-2[svc_23] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fef5000bd00 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 11 09:20:56 compute-0 ceph-mon[74426]: pgmap v108: 353 pgs: 353 active+clean; 457 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 B/s wr, 0 op/s
Dec 11 09:20:56 compute-0 ceph-mon[74426]: mon.compute-0@0(leader).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 11 09:20:56 compute-0 sudo[106079]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lwrtyxjxqpkpjcfwblhytgxtqxaklaqd ; /usr/bin/python3'
Dec 11 09:20:56 compute-0 sudo[106079]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 11 09:20:56 compute-0 python3[106081]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint radosgw-admin quay.io/ceph/ceph:v19 --fsid 31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch ps --daemon-type rgw --format json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 11 09:20:56 compute-0 podman[106083]: 2025-12-11 09:20:56.88265301 +0000 UTC m=+0.114073761 container create e46c48cb9f9773ab8b718951095636280500f1eb1f657eaab95d8f90cdcf696a (image=quay.io/ceph/ceph:v19, name=boring_jackson, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid)
Dec 11 09:20:56 compute-0 podman[106083]: 2025-12-11 09:20:56.798687605 +0000 UTC m=+0.030108386 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Dec 11 09:20:56 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-2-0-compute-0-iryjby[94782]: 11/12/2025 09:20:56 : epoch 693a8bfe : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fef58004340 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 11 09:20:56 compute-0 systemd[1]: Started libpod-conmon-e46c48cb9f9773ab8b718951095636280500f1eb1f657eaab95d8f90cdcf696a.scope.
Dec 11 09:20:56 compute-0 systemd[1]: Started libcrun container.
Dec 11 09:20:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ba0a4ec24b60ff1beeed9c5cbb96776c090c405a8b27dd2adbb4d989f837dc3c/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec 11 09:20:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ba0a4ec24b60ff1beeed9c5cbb96776c090c405a8b27dd2adbb4d989f837dc3c/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 11 09:20:57 compute-0 podman[106083]: 2025-12-11 09:20:57.028798577 +0000 UTC m=+0.260219348 container init e46c48cb9f9773ab8b718951095636280500f1eb1f657eaab95d8f90cdcf696a (image=quay.io/ceph/ceph:v19, name=boring_jackson, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, CEPH_REF=squid, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 11 09:20:57 compute-0 podman[106083]: 2025-12-11 09:20:57.036435319 +0000 UTC m=+0.267856070 container start e46c48cb9f9773ab8b718951095636280500f1eb1f657eaab95d8f90cdcf696a (image=quay.io/ceph/ceph:v19, name=boring_jackson, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 11 09:20:57 compute-0 podman[106083]: 2025-12-11 09:20:57.040440946 +0000 UTC m=+0.271861727 container attach e46c48cb9f9773ab8b718951095636280500f1eb1f657eaab95d8f90cdcf696a (image=quay.io/ceph/ceph:v19, name=boring_jackson, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Dec 11 09:20:57 compute-0 boring_jackson[106100]: ERROR: invalid flag --daemon-type
Dec 11 09:20:57 compute-0 systemd[1]: libpod-e46c48cb9f9773ab8b718951095636280500f1eb1f657eaab95d8f90cdcf696a.scope: Deactivated successfully.
Dec 11 09:20:57 compute-0 conmon[106100]: conmon e46c48cb9f9773ab8b71 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-e46c48cb9f9773ab8b718951095636280500f1eb1f657eaab95d8f90cdcf696a.scope/container/memory.events
Dec 11 09:20:57 compute-0 podman[106083]: 2025-12-11 09:20:57.107547216 +0000 UTC m=+0.338967967 container died e46c48cb9f9773ab8b718951095636280500f1eb1f657eaab95d8f90cdcf696a (image=quay.io/ceph/ceph:v19, name=boring_jackson, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2)
Dec 11 09:20:57 compute-0 systemd[1]: var-lib-containers-storage-overlay-ba0a4ec24b60ff1beeed9c5cbb96776c090c405a8b27dd2adbb4d989f837dc3c-merged.mount: Deactivated successfully.
Dec 11 09:20:57 compute-0 ceph-mgr[74715]: log_channel(cluster) log [DBG] : pgmap v109: 353 pgs: 353 active+clean; 457 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 11 09:20:57 compute-0 podman[106083]: 2025-12-11 09:20:57.154062381 +0000 UTC m=+0.385483132 container remove e46c48cb9f9773ab8b718951095636280500f1eb1f657eaab95d8f90cdcf696a (image=quay.io/ceph/ceph:v19, name=boring_jackson, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 11 09:20:57 compute-0 radosgw[93354]: ====== starting new request req=0x7f36cd2845d0 =====
Dec 11 09:20:57 compute-0 radosgw[93354]: ====== req done req=0x7f36cd2845d0 op status=0 http_status=200 latency=0.001000032s ======
Dec 11 09:20:57 compute-0 radosgw[93354]: beast: 0x7f36cd2845d0: 192.168.122.100 - anonymous [11/Dec/2025:09:20:57.162 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Dec 11 09:20:57 compute-0 systemd[1]: libpod-conmon-e46c48cb9f9773ab8b718951095636280500f1eb1f657eaab95d8f90cdcf696a.scope: Deactivated successfully.
Dec 11 09:20:57 compute-0 sudo[106079]: pam_unix(sudo:session): session closed for user root
Dec 11 09:20:57 compute-0 radosgw[93354]: ====== starting new request req=0x7f36cd2845d0 =====
Dec 11 09:20:57 compute-0 radosgw[93354]: ====== req done req=0x7f36cd2845d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 11 09:20:57 compute-0 radosgw[93354]: beast: 0x7f36cd2845d0: 192.168.122.102 - anonymous [11/Dec/2025:09:20:57.219 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 11 09:20:57 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-2-0-compute-0-iryjby[94782]: 11/12/2025 09:20:57 : epoch 693a8bfe : compute-0 : ganesha.nfsd-2[svc_23] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fef54003bf0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 11 09:20:57 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-2-0-compute-0-iryjby[94782]: 11/12/2025 09:20:57 : epoch 693a8bfe : compute-0 : ganesha.nfsd-2[svc_24] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fef4c0019e0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 11 09:20:57 compute-0 sudo[106132]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 11 09:20:57 compute-0 sudo[106132]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 11 09:20:57 compute-0 sudo[106132]: pam_unix(sudo:session): session closed for user root
Dec 11 09:20:58 compute-0 ceph-mon[74426]: pgmap v109: 353 pgs: 353 active+clean; 457 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 11 09:20:58 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-2-0-compute-0-iryjby[94782]: 11/12/2025 09:20:58 : epoch 693a8bfe : compute-0 : ganesha.nfsd-2[svc_26] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fef5000bd00 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 11 09:20:59 compute-0 ceph-mgr[74715]: log_channel(cluster) log [DBG] : pgmap v110: 353 pgs: 353 active+clean; 457 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 0 op/s
Dec 11 09:20:59 compute-0 radosgw[93354]: ====== starting new request req=0x7f36cd2845d0 =====
Dec 11 09:20:59 compute-0 radosgw[93354]: ====== req done req=0x7f36cd2845d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 11 09:20:59 compute-0 radosgw[93354]: beast: 0x7f36cd2845d0: 192.168.122.100 - anonymous [11/Dec/2025:09:20:59.165 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 11 09:20:59 compute-0 radosgw[93354]: ====== starting new request req=0x7f36cd2845d0 =====
Dec 11 09:20:59 compute-0 radosgw[93354]: ====== req done req=0x7f36cd2845d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 11 09:20:59 compute-0 radosgw[93354]: beast: 0x7f36cd2845d0: 192.168.122.102 - anonymous [11/Dec/2025:09:20:59.223 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 11 09:20:59 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-2-0-compute-0-iryjby[94782]: 11/12/2025 09:20:59 : epoch 693a8bfe : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fef600043b0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 11 09:20:59 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-2-0-compute-0-iryjby[94782]: 11/12/2025 09:20:59 : epoch 693a8bfe : compute-0 : ganesha.nfsd-2[svc_23] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fef54003bf0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 11 09:21:00 compute-0 ceph-mon[74426]: pgmap v110: 353 pgs: 353 active+clean; 457 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 0 op/s
Dec 11 09:21:00 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-2-0-compute-0-iryjby[94782]: 11/12/2025 09:21:00 : epoch 693a8bfe : compute-0 : ganesha.nfsd-2[svc_24] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fef4c0019e0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 11 09:21:01 compute-0 ceph-mgr[74715]: log_channel(cluster) log [DBG] : pgmap v111: 353 pgs: 353 active+clean; 457 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 11 09:21:01 compute-0 radosgw[93354]: ====== starting new request req=0x7f36cd2845d0 =====
Dec 11 09:21:01 compute-0 radosgw[93354]: ====== req done req=0x7f36cd2845d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 11 09:21:01 compute-0 radosgw[93354]: beast: 0x7f36cd2845d0: 192.168.122.100 - anonymous [11/Dec/2025:09:21:01.166 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 11 09:21:01 compute-0 radosgw[93354]: ====== starting new request req=0x7f36cd2845d0 =====
Dec 11 09:21:01 compute-0 radosgw[93354]: ====== req done req=0x7f36cd2845d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 11 09:21:01 compute-0 radosgw[93354]: beast: 0x7f36cd2845d0: 192.168.122.102 - anonymous [11/Dec/2025:09:21:01.226 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 11 09:21:01 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-2-0-compute-0-iryjby[94782]: 11/12/2025 09:21:01 : epoch 693a8bfe : compute-0 : ganesha.nfsd-2[svc_26] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fef5000bd00 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 11 09:21:01 compute-0 kernel: ganesha.nfsd[105852]: segfault at 50 ip 00007feffd0c532e sp 00007fef75ffa210 error 4 in libntirpc.so.5.8[7feffd0aa000+2c000] likely on CPU 5 (core 0, socket 5)
Dec 11 09:21:01 compute-0 kernel: Code: 47 20 66 41 89 86 f2 00 00 00 41 bf 01 00 00 00 b9 40 00 00 00 e9 af fd ff ff 66 90 48 8b 85 f8 00 00 00 48 8b 40 08 4c 8b 28 <45> 8b 65 50 49 8b 75 68 41 8b be 28 02 00 00 b9 40 00 00 00 e8 29
Dec 11 09:21:01 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-2-0-compute-0-iryjby[94782]: 11/12/2025 09:21:01 : epoch 693a8bfe : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fef600043b0 fd 48 proxy ignored for local
Dec 11 09:21:01 compute-0 ceph-mon[74426]: mon.compute-0@0(leader).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 11 09:21:01 compute-0 systemd[1]: Created slice Slice /system/systemd-coredump.
Dec 11 09:21:01 compute-0 systemd[1]: Started Process Core Dump (PID 106161/UID 0).
Dec 11 09:21:02 compute-0 ceph-mon[74426]: pgmap v111: 353 pgs: 353 active+clean; 457 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 11 09:21:02 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-haproxy-nfs-cephfs-compute-0-qtoxfz[95274]: [WARNING] 344/092102 (4) : Server backend/nfs.cephfs.1 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 16ms. 2 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Dec 11 09:21:02 compute-0 systemd-coredump[106162]: Process 94786 (ganesha.nfsd) of user 0 dumped core.
                                                    
                                                    Stack trace of thread 70:
                                                    #0  0x00007feffd0c532e n/a (/usr/lib64/libntirpc.so.5.8 + 0x2232e)
                                                    ELF object binary architecture: AMD x86-64
Dec 11 09:21:03 compute-0 systemd[1]: systemd-coredump@0-106161-0.service: Deactivated successfully.
Dec 11 09:21:03 compute-0 systemd[1]: systemd-coredump@0-106161-0.service: Consumed 1.425s CPU time.
Dec 11 09:21:03 compute-0 podman[106169]: 2025-12-11 09:21:03.096983731 +0000 UTC m=+0.024881420 container died b054262215ee01c3adca2deaedf76efed99f7a0a45370b1e29ea68ac174c96fc (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-2-0-compute-0-iryjby, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Dec 11 09:21:03 compute-0 systemd[1]: var-lib-containers-storage-overlay-83fb39222185cbf52b921995d3fb5bac1aede6e75c34ab89f21cb7e956cf72ee-merged.mount: Deactivated successfully.
Dec 11 09:21:03 compute-0 systemd[90513]: Created slice User Background Tasks Slice.
Dec 11 09:21:03 compute-0 podman[106169]: 2025-12-11 09:21:03.136511645 +0000 UTC m=+0.064409304 container remove b054262215ee01c3adca2deaedf76efed99f7a0a45370b1e29ea68ac174c96fc (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-2-0-compute-0-iryjby, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.40.1, org.label-schema.license=GPLv2)
Dec 11 09:21:03 compute-0 systemd[90513]: Starting Cleanup of User's Temporary Files and Directories...
Dec 11 09:21:03 compute-0 systemd[1]: ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060@nfs.cephfs.2.0.compute-0.iryjby.service: Main process exited, code=exited, status=139/n/a
Dec 11 09:21:03 compute-0 ceph-mgr[74715]: log_channel(cluster) log [DBG] : pgmap v112: 353 pgs: 353 active+clean; 457 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 11 09:21:03 compute-0 systemd[90513]: Finished Cleanup of User's Temporary Files and Directories.
Dec 11 09:21:03 compute-0 radosgw[93354]: ====== starting new request req=0x7f36cd2845d0 =====
Dec 11 09:21:03 compute-0 radosgw[93354]: ====== req done req=0x7f36cd2845d0 op status=0 http_status=200 latency=0.001000032s ======
Dec 11 09:21:03 compute-0 radosgw[93354]: beast: 0x7f36cd2845d0: 192.168.122.100 - anonymous [11/Dec/2025:09:21:03.168 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Dec 11 09:21:03 compute-0 radosgw[93354]: ====== starting new request req=0x7f36cd2845d0 =====
Dec 11 09:21:03 compute-0 radosgw[93354]: ====== req done req=0x7f36cd2845d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 11 09:21:03 compute-0 radosgw[93354]: beast: 0x7f36cd2845d0: 192.168.122.102 - anonymous [11/Dec/2025:09:21:03.230 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 11 09:21:03 compute-0 systemd[1]: ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060@nfs.cephfs.2.0.compute-0.iryjby.service: Failed with result 'exit-code'.
Dec 11 09:21:03 compute-0 systemd[1]: ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060@nfs.cephfs.2.0.compute-0.iryjby.service: Consumed 2.419s CPU time.
Dec 11 09:21:04 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-mgr-compute-0-wwpcae[74711]: ::ffff:192.168.122.100 - - [11/Dec/2025:09:21:04] "GET /metrics HTTP/1.1" 200 48238 "" "Prometheus/2.51.0"
Dec 11 09:21:04 compute-0 ceph-mgr[74715]: [prometheus INFO cherrypy.access.139680037156848] ::ffff:192.168.122.100 - - [11/Dec/2025:09:21:04] "GET /metrics HTTP/1.1" 200 48238 "" "Prometheus/2.51.0"
Dec 11 09:21:04 compute-0 ceph-mon[74426]: pgmap v112: 353 pgs: 353 active+clean; 457 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 11 09:21:05 compute-0 ceph-mgr[74715]: log_channel(cluster) log [DBG] : pgmap v113: 353 pgs: 353 active+clean; 457 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 11 09:21:05 compute-0 radosgw[93354]: ====== starting new request req=0x7f36cd2845d0 =====
Dec 11 09:21:05 compute-0 radosgw[93354]: ====== req done req=0x7f36cd2845d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 11 09:21:05 compute-0 radosgw[93354]: beast: 0x7f36cd2845d0: 192.168.122.100 - anonymous [11/Dec/2025:09:21:05.172 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 11 09:21:05 compute-0 radosgw[93354]: ====== starting new request req=0x7f36cd2845d0 =====
Dec 11 09:21:05 compute-0 radosgw[93354]: ====== req done req=0x7f36cd2845d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 11 09:21:05 compute-0 radosgw[93354]: beast: 0x7f36cd2845d0: 192.168.122.102 - anonymous [11/Dec/2025:09:21:05.233 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 11 09:21:05 compute-0 ceph-mon[74426]: pgmap v113: 353 pgs: 353 active+clean; 457 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 11 09:21:06 compute-0 ceph-mon[74426]: mon.compute-0@0(leader).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 11 09:21:07 compute-0 ceph-mgr[74715]: log_channel(cluster) log [DBG] : pgmap v114: 353 pgs: 353 active+clean; 457 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Dec 11 09:21:07 compute-0 radosgw[93354]: ====== starting new request req=0x7f36cd2845d0 =====
Dec 11 09:21:07 compute-0 radosgw[93354]: ====== req done req=0x7f36cd2845d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 11 09:21:07 compute-0 radosgw[93354]: beast: 0x7f36cd2845d0: 192.168.122.100 - anonymous [11/Dec/2025:09:21:07.175 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 11 09:21:07 compute-0 radosgw[93354]: ====== starting new request req=0x7f36cd2845d0 =====
Dec 11 09:21:07 compute-0 radosgw[93354]: ====== req done req=0x7f36cd2845d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 11 09:21:07 compute-0 radosgw[93354]: beast: 0x7f36cd2845d0: 192.168.122.102 - anonymous [11/Dec/2025:09:21:07.236 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 11 09:21:07 compute-0 sudo[106240]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yycwknafqiayooebvbaomsyazrmcfzug ; /usr/bin/python3'
Dec 11 09:21:07 compute-0 sudo[106240]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 11 09:21:07 compute-0 python3[106242]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint radosgw-admin quay.io/ceph/ceph:v19 --fsid 31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch ps --daemon-type rgw --format json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 11 09:21:07 compute-0 podman[106243]: 2025-12-11 09:21:07.510797655 +0000 UTC m=+0.047760987 container create 05bc7b7f72f146b0b939036bad1fc1dae8cdcd36ffe6a10ecf7436d462367ca6 (image=quay.io/ceph/ceph:v19, name=determined_kilby, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=squid, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 11 09:21:07 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-haproxy-nfs-cephfs-compute-0-qtoxfz[95274]: [WARNING] 344/092107 (4) : Server backend/nfs.cephfs.2 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 1 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Dec 11 09:21:07 compute-0 systemd[1]: Started libpod-conmon-05bc7b7f72f146b0b939036bad1fc1dae8cdcd36ffe6a10ecf7436d462367ca6.scope.
Dec 11 09:21:07 compute-0 systemd[1]: Started libcrun container.
Dec 11 09:21:07 compute-0 podman[106243]: 2025-12-11 09:21:07.491033468 +0000 UTC m=+0.027996830 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Dec 11 09:21:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9214b1cc0b94dd87fd79e99c12d5bd8f3823fade76224dd1d20133abe0dc8d81/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 11 09:21:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9214b1cc0b94dd87fd79e99c12d5bd8f3823fade76224dd1d20133abe0dc8d81/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec 11 09:21:07 compute-0 podman[106243]: 2025-12-11 09:21:07.607251335 +0000 UTC m=+0.144214687 container init 05bc7b7f72f146b0b939036bad1fc1dae8cdcd36ffe6a10ecf7436d462367ca6 (image=quay.io/ceph/ceph:v19, name=determined_kilby, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=squid, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 11 09:21:07 compute-0 podman[106243]: 2025-12-11 09:21:07.614875437 +0000 UTC m=+0.151838759 container start 05bc7b7f72f146b0b939036bad1fc1dae8cdcd36ffe6a10ecf7436d462367ca6 (image=quay.io/ceph/ceph:v19, name=determined_kilby, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, io.buildah.version=1.40.1)
Dec 11 09:21:07 compute-0 podman[106243]: 2025-12-11 09:21:07.61936661 +0000 UTC m=+0.156329962 container attach 05bc7b7f72f146b0b939036bad1fc1dae8cdcd36ffe6a10ecf7436d462367ca6 (image=quay.io/ceph/ceph:v19, name=determined_kilby, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True)
Dec 11 09:21:07 compute-0 determined_kilby[106258]: ERROR: invalid flag --daemon-type
Dec 11 09:21:07 compute-0 systemd[1]: libpod-05bc7b7f72f146b0b939036bad1fc1dae8cdcd36ffe6a10ecf7436d462367ca6.scope: Deactivated successfully.
Dec 11 09:21:07 compute-0 conmon[106258]: conmon 05bc7b7f72f146b0b939 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-05bc7b7f72f146b0b939036bad1fc1dae8cdcd36ffe6a10ecf7436d462367ca6.scope/container/memory.events
Dec 11 09:21:07 compute-0 podman[106243]: 2025-12-11 09:21:07.671850345 +0000 UTC m=+0.208813667 container died 05bc7b7f72f146b0b939036bad1fc1dae8cdcd36ffe6a10ecf7436d462367ca6 (image=quay.io/ceph/ceph:v19, name=determined_kilby, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid)
Dec 11 09:21:07 compute-0 systemd[1]: var-lib-containers-storage-overlay-9214b1cc0b94dd87fd79e99c12d5bd8f3823fade76224dd1d20133abe0dc8d81-merged.mount: Deactivated successfully.
Dec 11 09:21:07 compute-0 podman[106243]: 2025-12-11 09:21:07.713451555 +0000 UTC m=+0.250414887 container remove 05bc7b7f72f146b0b939036bad1fc1dae8cdcd36ffe6a10ecf7436d462367ca6 (image=quay.io/ceph/ceph:v19, name=determined_kilby, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, CEPH_REF=squid)
Dec 11 09:21:07 compute-0 systemd[1]: libpod-conmon-05bc7b7f72f146b0b939036bad1fc1dae8cdcd36ffe6a10ecf7436d462367ca6.scope: Deactivated successfully.
Dec 11 09:21:07 compute-0 sudo[106240]: pam_unix(sudo:session): session closed for user root
Dec 11 09:21:08 compute-0 ceph-mon[74426]: pgmap v114: 353 pgs: 353 active+clean; 457 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Dec 11 09:21:08 compute-0 ceph-mon[74426]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Dec 11 09:21:08 compute-0 ceph-mon[74426]: log_channel(audit) log [DBG] : from='mgr.14577 192.168.122.100:0/2102670525' entity='mgr.compute-0.wwpcae' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec 11 09:21:08 compute-0 sudo[106291]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 11 09:21:08 compute-0 sudo[106291]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 11 09:21:08 compute-0 sudo[106291]: pam_unix(sudo:session): session closed for user root
Dec 11 09:21:08 compute-0 sudo[106316]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --timeout 895 gather-facts
Dec 11 09:21:08 compute-0 sudo[106316]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 11 09:21:09 compute-0 ceph-mgr[74715]: log_channel(cluster) log [DBG] : pgmap v115: 353 pgs: 353 active+clean; 457 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 597 B/s rd, 85 B/s wr, 0 op/s
Dec 11 09:21:09 compute-0 radosgw[93354]: ====== starting new request req=0x7f36cd2845d0 =====
Dec 11 09:21:09 compute-0 radosgw[93354]: ====== req done req=0x7f36cd2845d0 op status=0 http_status=200 latency=0.001000032s ======
Dec 11 09:21:09 compute-0 radosgw[93354]: beast: 0x7f36cd2845d0: 192.168.122.100 - anonymous [11/Dec/2025:09:21:09.176 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Dec 11 09:21:09 compute-0 ceph-mon[74426]: from='mgr.14577 192.168.122.100:0/2102670525' entity='mgr.compute-0.wwpcae' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec 11 09:21:09 compute-0 radosgw[93354]: ====== starting new request req=0x7f36cd2845d0 =====
Dec 11 09:21:09 compute-0 radosgw[93354]: ====== req done req=0x7f36cd2845d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 11 09:21:09 compute-0 radosgw[93354]: beast: 0x7f36cd2845d0: 192.168.122.102 - anonymous [11/Dec/2025:09:21:09.240 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 11 09:21:09 compute-0 sudo[106316]: pam_unix(sudo:session): session closed for user root
Dec 11 09:21:09 compute-0 ceph-mon[74426]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 11 09:21:09 compute-0 ceph-mon[74426]: log_channel(audit) log [DBG] : from='mgr.14577 192.168.122.100:0/2102670525' entity='mgr.compute-0.wwpcae' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 11 09:21:09 compute-0 ceph-mon[74426]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Dec 11 09:21:09 compute-0 ceph-mon[74426]: log_channel(audit) log [INF] : from='mgr.14577 192.168.122.100:0/2102670525' entity='mgr.compute-0.wwpcae' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec 11 09:21:09 compute-0 ceph-mon[74426]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Dec 11 09:21:09 compute-0 ceph-mon[74426]: log_channel(audit) log [INF] : from='mgr.14577 192.168.122.100:0/2102670525' entity='mgr.compute-0.wwpcae' 
Dec 11 09:21:09 compute-0 ceph-mon[74426]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Dec 11 09:21:09 compute-0 ceph-mon[74426]: log_channel(audit) log [INF] : from='mgr.14577 192.168.122.100:0/2102670525' entity='mgr.compute-0.wwpcae' 
Dec 11 09:21:09 compute-0 ceph-mon[74426]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Dec 11 09:21:09 compute-0 ceph-mon[74426]: log_channel(audit) log [DBG] : from='mgr.14577 192.168.122.100:0/2102670525' entity='mgr.compute-0.wwpcae' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec 11 09:21:09 compute-0 ceph-mon[74426]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Dec 11 09:21:09 compute-0 ceph-mon[74426]: log_channel(audit) log [INF] : from='mgr.14577 192.168.122.100:0/2102670525' entity='mgr.compute-0.wwpcae' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec 11 09:21:09 compute-0 ceph-mon[74426]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 11 09:21:09 compute-0 ceph-mon[74426]: log_channel(audit) log [DBG] : from='mgr.14577 192.168.122.100:0/2102670525' entity='mgr.compute-0.wwpcae' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 11 09:21:09 compute-0 sudo[106373]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 11 09:21:09 compute-0 sudo[106373]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 11 09:21:09 compute-0 sudo[106373]: pam_unix(sudo:session): session closed for user root
Dec 11 09:21:09 compute-0 sudo[106398]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Dec 11 09:21:09 compute-0 sudo[106398]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 11 09:21:10 compute-0 podman[106462]: 2025-12-11 09:21:10.040208654 +0000 UTC m=+0.026791700 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec 11 09:21:10 compute-0 podman[106462]: 2025-12-11 09:21:10.174739674 +0000 UTC m=+0.161322700 container create 7fc0985a2a4499b16480803385071cb5abedf9801c95cd0a6eadd7e4ecb231d9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=fervent_allen, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 11 09:21:10 compute-0 systemd[1]: Started libpod-conmon-7fc0985a2a4499b16480803385071cb5abedf9801c95cd0a6eadd7e4ecb231d9.scope.
Dec 11 09:21:10 compute-0 systemd[1]: Started libcrun container.
Dec 11 09:21:10 compute-0 ceph-mon[74426]: pgmap v115: 353 pgs: 353 active+clean; 457 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 597 B/s rd, 85 B/s wr, 0 op/s
Dec 11 09:21:10 compute-0 ceph-mon[74426]: from='mgr.14577 192.168.122.100:0/2102670525' entity='mgr.compute-0.wwpcae' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 11 09:21:10 compute-0 ceph-mon[74426]: from='mgr.14577 192.168.122.100:0/2102670525' entity='mgr.compute-0.wwpcae' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec 11 09:21:10 compute-0 ceph-mon[74426]: from='mgr.14577 192.168.122.100:0/2102670525' entity='mgr.compute-0.wwpcae' 
Dec 11 09:21:10 compute-0 ceph-mon[74426]: from='mgr.14577 192.168.122.100:0/2102670525' entity='mgr.compute-0.wwpcae' 
Dec 11 09:21:10 compute-0 ceph-mon[74426]: from='mgr.14577 192.168.122.100:0/2102670525' entity='mgr.compute-0.wwpcae' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec 11 09:21:10 compute-0 ceph-mon[74426]: from='mgr.14577 192.168.122.100:0/2102670525' entity='mgr.compute-0.wwpcae' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec 11 09:21:10 compute-0 ceph-mon[74426]: from='mgr.14577 192.168.122.100:0/2102670525' entity='mgr.compute-0.wwpcae' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 11 09:21:10 compute-0 podman[106462]: 2025-12-11 09:21:10.321654435 +0000 UTC m=+0.308237491 container init 7fc0985a2a4499b16480803385071cb5abedf9801c95cd0a6eadd7e4ecb231d9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=fervent_allen, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=squid, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 11 09:21:10 compute-0 podman[106462]: 2025-12-11 09:21:10.328170487 +0000 UTC m=+0.314753513 container start 7fc0985a2a4499b16480803385071cb5abedf9801c95cd0a6eadd7e4ecb231d9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=fervent_allen, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 11 09:21:10 compute-0 podman[106462]: 2025-12-11 09:21:10.333148331 +0000 UTC m=+0.319731357 container attach 7fc0985a2a4499b16480803385071cb5abedf9801c95cd0a6eadd7e4ecb231d9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=fervent_allen, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 11 09:21:10 compute-0 fervent_allen[106479]: 167 167
Dec 11 09:21:10 compute-0 systemd[1]: libpod-7fc0985a2a4499b16480803385071cb5abedf9801c95cd0a6eadd7e4ecb231d9.scope: Deactivated successfully.
Dec 11 09:21:10 compute-0 podman[106462]: 2025-12-11 09:21:10.3346985 +0000 UTC m=+0.321281526 container died 7fc0985a2a4499b16480803385071cb5abedf9801c95cd0a6eadd7e4ecb231d9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=fervent_allen, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True)
Dec 11 09:21:10 compute-0 systemd[1]: var-lib-containers-storage-overlay-591d9e166ae7cffb8b38cc02fbb66181933c271434f92797866ffd2509d97ee4-merged.mount: Deactivated successfully.
Dec 11 09:21:10 compute-0 podman[106462]: 2025-12-11 09:21:10.381172896 +0000 UTC m=+0.367755922 container remove 7fc0985a2a4499b16480803385071cb5abedf9801c95cd0a6eadd7e4ecb231d9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=fervent_allen, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec 11 09:21:10 compute-0 systemd[1]: libpod-conmon-7fc0985a2a4499b16480803385071cb5abedf9801c95cd0a6eadd7e4ecb231d9.scope: Deactivated successfully.
Dec 11 09:21:10 compute-0 podman[106500]: 2025-12-11 09:21:10.614687914 +0000 UTC m=+0.111434399 container create 78eee66cef1d3fe4df9b8118c766882896db19ff6ae3e6cf296a9013c2cb6a24 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=fervent_swartz, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=squid, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Dec 11 09:21:10 compute-0 podman[106500]: 2025-12-11 09:21:10.532357391 +0000 UTC m=+0.029103906 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec 11 09:21:10 compute-0 systemd[1]: Started libpod-conmon-78eee66cef1d3fe4df9b8118c766882896db19ff6ae3e6cf296a9013c2cb6a24.scope.
Dec 11 09:21:10 compute-0 systemd[1]: Started libcrun container.
Dec 11 09:21:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6553440a6e2dde8055b5b4e03b50e96dba1a14c875c23cbc30296ae7bc8342d7/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 11 09:21:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6553440a6e2dde8055b5b4e03b50e96dba1a14c875c23cbc30296ae7bc8342d7/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 11 09:21:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6553440a6e2dde8055b5b4e03b50e96dba1a14c875c23cbc30296ae7bc8342d7/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 11 09:21:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6553440a6e2dde8055b5b4e03b50e96dba1a14c875c23cbc30296ae7bc8342d7/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 11 09:21:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6553440a6e2dde8055b5b4e03b50e96dba1a14c875c23cbc30296ae7bc8342d7/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec 11 09:21:10 compute-0 podman[106500]: 2025-12-11 09:21:10.747041713 +0000 UTC m=+0.243788228 container init 78eee66cef1d3fe4df9b8118c766882896db19ff6ae3e6cf296a9013c2cb6a24 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=fervent_swartz, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1)
Dec 11 09:21:10 compute-0 podman[106500]: 2025-12-11 09:21:10.756650991 +0000 UTC m=+0.253397476 container start 78eee66cef1d3fe4df9b8118c766882896db19ff6ae3e6cf296a9013c2cb6a24 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=fervent_swartz, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Dec 11 09:21:10 compute-0 podman[106500]: 2025-12-11 09:21:10.760699768 +0000 UTC m=+0.257446283 container attach 78eee66cef1d3fe4df9b8118c766882896db19ff6ae3e6cf296a9013c2cb6a24 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=fervent_swartz, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.40.1, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, CEPH_REF=squid)
Dec 11 09:21:11 compute-0 fervent_swartz[106517]: --> passed data devices: 0 physical, 1 LVM
Dec 11 09:21:11 compute-0 fervent_swartz[106517]: --> All data devices are unavailable
Dec 11 09:21:11 compute-0 ceph-mgr[74715]: log_channel(cluster) log [DBG] : pgmap v116: 353 pgs: 353 active+clean; 457 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 85 B/s wr, 0 op/s
Dec 11 09:21:11 compute-0 systemd[1]: libpod-78eee66cef1d3fe4df9b8118c766882896db19ff6ae3e6cf296a9013c2cb6a24.scope: Deactivated successfully.
Dec 11 09:21:11 compute-0 podman[106500]: 2025-12-11 09:21:11.169584502 +0000 UTC m=+0.666330997 container died 78eee66cef1d3fe4df9b8118c766882896db19ff6ae3e6cf296a9013c2cb6a24 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=fervent_swartz, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec 11 09:21:11 compute-0 radosgw[93354]: ====== starting new request req=0x7f36cd2845d0 =====
Dec 11 09:21:11 compute-0 radosgw[93354]: ====== req done req=0x7f36cd2845d0 op status=0 http_status=200 latency=0.001000031s ======
Dec 11 09:21:11 compute-0 radosgw[93354]: beast: 0x7f36cd2845d0: 192.168.122.100 - anonymous [11/Dec/2025:09:21:11.178 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Dec 11 09:21:11 compute-0 systemd[1]: var-lib-containers-storage-overlay-6553440a6e2dde8055b5b4e03b50e96dba1a14c875c23cbc30296ae7bc8342d7-merged.mount: Deactivated successfully.
Dec 11 09:21:11 compute-0 radosgw[93354]: ====== starting new request req=0x7f36cd2845d0 =====
Dec 11 09:21:11 compute-0 radosgw[93354]: ====== req done req=0x7f36cd2845d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 11 09:21:11 compute-0 radosgw[93354]: beast: 0x7f36cd2845d0: 192.168.122.102 - anonymous [11/Dec/2025:09:21:11.243 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 11 09:21:11 compute-0 podman[106500]: 2025-12-11 09:21:11.267655444 +0000 UTC m=+0.764401929 container remove 78eee66cef1d3fe4df9b8118c766882896db19ff6ae3e6cf296a9013c2cb6a24 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=fervent_swartz, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=squid, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 11 09:21:11 compute-0 systemd[1]: libpod-conmon-78eee66cef1d3fe4df9b8118c766882896db19ff6ae3e6cf296a9013c2cb6a24.scope: Deactivated successfully.
Dec 11 09:21:11 compute-0 sudo[106398]: pam_unix(sudo:session): session closed for user root
Dec 11 09:21:11 compute-0 sudo[106545]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 11 09:21:11 compute-0 sudo[106545]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 11 09:21:11 compute-0 sudo[106545]: pam_unix(sudo:session): session closed for user root
Dec 11 09:21:11 compute-0 sudo[106570]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060 -- lvm list --format json
Dec 11 09:21:11 compute-0 sudo[106570]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 11 09:21:11 compute-0 ceph-mon[74426]: mon.compute-0@0(leader).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 11 09:21:11 compute-0 podman[106635]: 2025-12-11 09:21:11.881556568 +0000 UTC m=+0.065822419 container create a2c55a52e4d93942681db6c95e0781b84cc652786ddd26fc9ce07e8ea1b672c7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=cool_tesla, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid)
Dec 11 09:21:11 compute-0 systemd[1]: Started libpod-conmon-a2c55a52e4d93942681db6c95e0781b84cc652786ddd26fc9ce07e8ea1b672c7.scope.
Dec 11 09:21:11 compute-0 systemd[1]: Started libcrun container.
Dec 11 09:21:11 compute-0 podman[106635]: 2025-12-11 09:21:11.848677395 +0000 UTC m=+0.032943276 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec 11 09:21:11 compute-0 podman[106635]: 2025-12-11 09:21:11.957200042 +0000 UTC m=+0.141465913 container init a2c55a52e4d93942681db6c95e0781b84cc652786ddd26fc9ce07e8ea1b672c7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=cool_tesla, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0)
Dec 11 09:21:11 compute-0 podman[106635]: 2025-12-11 09:21:11.964480669 +0000 UTC m=+0.148746520 container start a2c55a52e4d93942681db6c95e0781b84cc652786ddd26fc9ce07e8ea1b672c7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=cool_tesla, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 11 09:21:11 compute-0 podman[106635]: 2025-12-11 09:21:11.968942378 +0000 UTC m=+0.153208249 container attach a2c55a52e4d93942681db6c95e0781b84cc652786ddd26fc9ce07e8ea1b672c7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=cool_tesla, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS)
Dec 11 09:21:11 compute-0 cool_tesla[106652]: 167 167
Dec 11 09:21:11 compute-0 systemd[1]: libpod-a2c55a52e4d93942681db6c95e0781b84cc652786ddd26fc9ce07e8ea1b672c7.scope: Deactivated successfully.
Dec 11 09:21:11 compute-0 conmon[106652]: conmon a2c55a52e4d93942681d <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-a2c55a52e4d93942681db6c95e0781b84cc652786ddd26fc9ce07e8ea1b672c7.scope/container/memory.events
Dec 11 09:21:12 compute-0 podman[106657]: 2025-12-11 09:21:12.025995123 +0000 UTC m=+0.034087042 container died a2c55a52e4d93942681db6c95e0781b84cc652786ddd26fc9ce07e8ea1b672c7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=cool_tesla, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid)
Dec 11 09:21:12 compute-0 systemd[1]: var-lib-containers-storage-overlay-d593eaa7e6e1d539511d2689b57c2eeb4cd3c767dd00709f723479dd8fe1d624-merged.mount: Deactivated successfully.
Dec 11 09:21:12 compute-0 podman[106657]: 2025-12-11 09:21:12.191202665 +0000 UTC m=+0.199294554 container remove a2c55a52e4d93942681db6c95e0781b84cc652786ddd26fc9ce07e8ea1b672c7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=cool_tesla, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250325, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS)
Dec 11 09:21:12 compute-0 systemd[1]: libpod-conmon-a2c55a52e4d93942681db6c95e0781b84cc652786ddd26fc9ce07e8ea1b672c7.scope: Deactivated successfully.
Dec 11 09:21:12 compute-0 ceph-mon[74426]: pgmap v116: 353 pgs: 353 active+clean; 457 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 85 B/s wr, 0 op/s
Dec 11 09:21:12 compute-0 podman[106680]: 2025-12-11 09:21:12.368583734 +0000 UTC m=+0.050089209 container create 0ff9a9defad26a0ca8fe6e1e5b951a86454bd90c4d9c0e177e440b35f33bc27d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=flamboyant_morse, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 11 09:21:12 compute-0 systemd[1]: Started libpod-conmon-0ff9a9defad26a0ca8fe6e1e5b951a86454bd90c4d9c0e177e440b35f33bc27d.scope.
Dec 11 09:21:12 compute-0 systemd[1]: Started libcrun container.
Dec 11 09:21:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e2442e0764e5018ec111362b49748bdd44d364aa6e17cb91f67460475ca7c924/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 11 09:21:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e2442e0764e5018ec111362b49748bdd44d364aa6e17cb91f67460475ca7c924/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 11 09:21:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e2442e0764e5018ec111362b49748bdd44d364aa6e17cb91f67460475ca7c924/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 11 09:21:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e2442e0764e5018ec111362b49748bdd44d364aa6e17cb91f67460475ca7c924/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 11 09:21:12 compute-0 podman[106680]: 2025-12-11 09:21:12.349081167 +0000 UTC m=+0.030586662 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec 11 09:21:12 compute-0 podman[106680]: 2025-12-11 09:21:12.489044933 +0000 UTC m=+0.170550428 container init 0ff9a9defad26a0ca8fe6e1e5b951a86454bd90c4d9c0e177e440b35f33bc27d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=flamboyant_morse, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=squid)
Dec 11 09:21:12 compute-0 podman[106680]: 2025-12-11 09:21:12.496432334 +0000 UTC m=+0.177937809 container start 0ff9a9defad26a0ca8fe6e1e5b951a86454bd90c4d9c0e177e440b35f33bc27d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=flamboyant_morse, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid)
Dec 11 09:21:12 compute-0 podman[106680]: 2025-12-11 09:21:12.53135249 +0000 UTC m=+0.212857965 container attach 0ff9a9defad26a0ca8fe6e1e5b951a86454bd90c4d9c0e177e440b35f33bc27d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=flamboyant_morse, io.buildah.version=1.40.1, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325)
Dec 11 09:21:12 compute-0 flamboyant_morse[106696]: {
Dec 11 09:21:12 compute-0 flamboyant_morse[106696]:     "1": [
Dec 11 09:21:12 compute-0 flamboyant_morse[106696]:         {
Dec 11 09:21:12 compute-0 flamboyant_morse[106696]:             "devices": [
Dec 11 09:21:12 compute-0 flamboyant_morse[106696]:                 "/dev/loop3"
Dec 11 09:21:12 compute-0 flamboyant_morse[106696]:             ],
Dec 11 09:21:12 compute-0 flamboyant_morse[106696]:             "lv_name": "ceph_lv0",
Dec 11 09:21:12 compute-0 flamboyant_morse[106696]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec 11 09:21:12 compute-0 flamboyant_morse[106696]:             "lv_size": "21470642176",
Dec 11 09:21:12 compute-0 flamboyant_morse[106696]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=TIjPv6-JN4c-U1Hf-F3k1-2mgj-CwGi-pbjIHh,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=865a308b-cdc3-4034-b5eb-feb596b462bf,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 11 09:21:12 compute-0 flamboyant_morse[106696]:             "lv_uuid": "TIjPv6-JN4c-U1Hf-F3k1-2mgj-CwGi-pbjIHh",
Dec 11 09:21:12 compute-0 flamboyant_morse[106696]:             "name": "ceph_lv0",
Dec 11 09:21:12 compute-0 flamboyant_morse[106696]:             "path": "/dev/ceph_vg0/ceph_lv0",
Dec 11 09:21:12 compute-0 flamboyant_morse[106696]:             "tags": {
Dec 11 09:21:12 compute-0 flamboyant_morse[106696]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec 11 09:21:12 compute-0 flamboyant_morse[106696]:                 "ceph.block_uuid": "TIjPv6-JN4c-U1Hf-F3k1-2mgj-CwGi-pbjIHh",
Dec 11 09:21:12 compute-0 flamboyant_morse[106696]:                 "ceph.cephx_lockbox_secret": "",
Dec 11 09:21:12 compute-0 flamboyant_morse[106696]:                 "ceph.cluster_fsid": "31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060",
Dec 11 09:21:12 compute-0 flamboyant_morse[106696]:                 "ceph.cluster_name": "ceph",
Dec 11 09:21:12 compute-0 flamboyant_morse[106696]:                 "ceph.crush_device_class": "",
Dec 11 09:21:12 compute-0 flamboyant_morse[106696]:                 "ceph.encrypted": "0",
Dec 11 09:21:12 compute-0 flamboyant_morse[106696]:                 "ceph.osd_fsid": "865a308b-cdc3-4034-b5eb-feb596b462bf",
Dec 11 09:21:12 compute-0 flamboyant_morse[106696]:                 "ceph.osd_id": "1",
Dec 11 09:21:12 compute-0 flamboyant_morse[106696]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 11 09:21:12 compute-0 flamboyant_morse[106696]:                 "ceph.type": "block",
Dec 11 09:21:12 compute-0 flamboyant_morse[106696]:                 "ceph.vdo": "0",
Dec 11 09:21:12 compute-0 flamboyant_morse[106696]:                 "ceph.with_tpm": "0"
Dec 11 09:21:12 compute-0 flamboyant_morse[106696]:             },
Dec 11 09:21:12 compute-0 flamboyant_morse[106696]:             "type": "block",
Dec 11 09:21:12 compute-0 flamboyant_morse[106696]:             "vg_name": "ceph_vg0"
Dec 11 09:21:12 compute-0 flamboyant_morse[106696]:         }
Dec 11 09:21:12 compute-0 flamboyant_morse[106696]:     ]
Dec 11 09:21:12 compute-0 flamboyant_morse[106696]: }
Dec 11 09:21:12 compute-0 systemd[1]: libpod-0ff9a9defad26a0ca8fe6e1e5b951a86454bd90c4d9c0e177e440b35f33bc27d.scope: Deactivated successfully.
Dec 11 09:21:12 compute-0 podman[106680]: 2025-12-11 09:21:12.817573447 +0000 UTC m=+0.499078932 container died 0ff9a9defad26a0ca8fe6e1e5b951a86454bd90c4d9c0e177e440b35f33bc27d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=flamboyant_morse, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec 11 09:21:12 compute-0 systemd[1]: var-lib-containers-storage-overlay-e2442e0764e5018ec111362b49748bdd44d364aa6e17cb91f67460475ca7c924-merged.mount: Deactivated successfully.
Dec 11 09:21:13 compute-0 podman[106680]: 2025-12-11 09:21:13.00916586 +0000 UTC m=+0.690671335 container remove 0ff9a9defad26a0ca8fe6e1e5b951a86454bd90c4d9c0e177e440b35f33bc27d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=flamboyant_morse, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 11 09:21:13 compute-0 systemd[1]: libpod-conmon-0ff9a9defad26a0ca8fe6e1e5b951a86454bd90c4d9c0e177e440b35f33bc27d.scope: Deactivated successfully.
Dec 11 09:21:13 compute-0 sudo[106570]: pam_unix(sudo:session): session closed for user root
Dec 11 09:21:13 compute-0 sudo[106721]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 11 09:21:13 compute-0 sudo[106721]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 11 09:21:13 compute-0 sudo[106721]: pam_unix(sudo:session): session closed for user root
Dec 11 09:21:13 compute-0 ceph-mgr[74715]: log_channel(cluster) log [DBG] : pgmap v117: 353 pgs: 353 active+clean; 457 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 85 B/s wr, 0 op/s
Dec 11 09:21:13 compute-0 radosgw[93354]: ====== starting new request req=0x7f36cd2845d0 =====
Dec 11 09:21:13 compute-0 radosgw[93354]: ====== req done req=0x7f36cd2845d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 11 09:21:13 compute-0 radosgw[93354]: beast: 0x7f36cd2845d0: 192.168.122.100 - anonymous [11/Dec/2025:09:21:13.180 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 11 09:21:13 compute-0 sudo[106746]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060 -- raw list --format json
Dec 11 09:21:13 compute-0 sudo[106746]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 11 09:21:13 compute-0 radosgw[93354]: ====== starting new request req=0x7f36cd2845d0 =====
Dec 11 09:21:13 compute-0 radosgw[93354]: ====== req done req=0x7f36cd2845d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 11 09:21:13 compute-0 radosgw[93354]: beast: 0x7f36cd2845d0: 192.168.122.102 - anonymous [11/Dec/2025:09:21:13.247 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 11 09:21:13 compute-0 systemd[1]: ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060@nfs.cephfs.2.0.compute-0.iryjby.service: Scheduled restart job, restart counter is at 1.
Dec 11 09:21:13 compute-0 systemd[1]: Stopped Ceph nfs.cephfs.2.0.compute-0.iryjby for 31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060.
Dec 11 09:21:13 compute-0 systemd[1]: ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060@nfs.cephfs.2.0.compute-0.iryjby.service: Consumed 2.419s CPU time.
Dec 11 09:21:13 compute-0 systemd[1]: Starting Ceph nfs.cephfs.2.0.compute-0.iryjby for 31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060...
Dec 11 09:21:13 compute-0 podman[106822]: 2025-12-11 09:21:13.661779898 +0000 UTC m=+0.120669575 container create d684dc824e88d4dee38c46c3af2f4f18dccd226c2a2fe9045ff7d9f2f8ff08b0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=focused_cori, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, CEPH_REF=squid)
Dec 11 09:21:13 compute-0 podman[106822]: 2025-12-11 09:21:13.571861131 +0000 UTC m=+0.030750848 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec 11 09:21:13 compute-0 systemd[1]: Started libpod-conmon-d684dc824e88d4dee38c46c3af2f4f18dccd226c2a2fe9045ff7d9f2f8ff08b0.scope.
Dec 11 09:21:13 compute-0 systemd[1]: Started libcrun container.
Dec 11 09:21:13 compute-0 podman[106822]: 2025-12-11 09:21:13.808918428 +0000 UTC m=+0.267808145 container init d684dc824e88d4dee38c46c3af2f4f18dccd226c2a2fe9045ff7d9f2f8ff08b0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=focused_cori, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250325, CEPH_REF=squid, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Dec 11 09:21:13 compute-0 podman[106822]: 2025-12-11 09:21:13.817540846 +0000 UTC m=+0.276430523 container start d684dc824e88d4dee38c46c3af2f4f18dccd226c2a2fe9045ff7d9f2f8ff08b0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=focused_cori, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 11 09:21:13 compute-0 podman[106822]: 2025-12-11 09:21:13.821183059 +0000 UTC m=+0.280072756 container attach d684dc824e88d4dee38c46c3af2f4f18dccd226c2a2fe9045ff7d9f2f8ff08b0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=focused_cori, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=squid, ceph=True, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 11 09:21:13 compute-0 focused_cori[106851]: 167 167
Dec 11 09:21:13 compute-0 systemd[1]: libpod-d684dc824e88d4dee38c46c3af2f4f18dccd226c2a2fe9045ff7d9f2f8ff08b0.scope: Deactivated successfully.
Dec 11 09:21:13 compute-0 podman[106822]: 2025-12-11 09:21:13.824537533 +0000 UTC m=+0.283427230 container died d684dc824e88d4dee38c46c3af2f4f18dccd226c2a2fe9045ff7d9f2f8ff08b0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=focused_cori, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 11 09:21:13 compute-0 systemd[1]: var-lib-containers-storage-overlay-08ac57fce7e2ea13f08ee4033e29b5878dd66d6957280f7bc5e8ca9d3f4db684-merged.mount: Deactivated successfully.
Dec 11 09:21:13 compute-0 podman[106822]: 2025-12-11 09:21:13.918367714 +0000 UTC m=+0.377257401 container remove d684dc824e88d4dee38c46c3af2f4f18dccd226c2a2fe9045ff7d9f2f8ff08b0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=focused_cori, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Dec 11 09:21:13 compute-0 systemd[1]: libpod-conmon-d684dc824e88d4dee38c46c3af2f4f18dccd226c2a2fe9045ff7d9f2f8ff08b0.scope: Deactivated successfully.
Dec 11 09:21:14 compute-0 podman[106889]: 2025-12-11 09:21:14.013745382 +0000 UTC m=+0.057856062 container create 18ad14b9cd928a45ce4e864db1fb0529242cd12e88a89b171140393a4d5cd3a8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-2-0-compute-0-iryjby, ceph=True, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1)
Dec 11 09:21:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/568004864dcaa8ecdf83d46733e0df5cf01151fc9ec5c27bf3177ab929ab045c/merged/etc/ganesha supports timestamps until 2038 (0x7fffffff)
Dec 11 09:21:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/568004864dcaa8ecdf83d46733e0df5cf01151fc9ec5c27bf3177ab929ab045c/merged/etc/ceph/keyring supports timestamps until 2038 (0x7fffffff)
Dec 11 09:21:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/568004864dcaa8ecdf83d46733e0df5cf01151fc9ec5c27bf3177ab929ab045c/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 11 09:21:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/568004864dcaa8ecdf83d46733e0df5cf01151fc9ec5c27bf3177ab929ab045c/merged/var/lib/ceph/radosgw/ceph-nfs.cephfs.2.0.compute-0.iryjby-rgw/keyring supports timestamps until 2038 (0x7fffffff)
Dec 11 09:21:14 compute-0 podman[106889]: 2025-12-11 09:21:13.982977625 +0000 UTC m=+0.027088335 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec 11 09:21:14 compute-0 podman[106889]: 2025-12-11 09:21:14.080240901 +0000 UTC m=+0.124351601 container init 18ad14b9cd928a45ce4e864db1fb0529242cd12e88a89b171140393a4d5cd3a8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-2-0-compute-0-iryjby, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=squid, OSD_FLAVOR=default, io.buildah.version=1.40.1)
Dec 11 09:21:14 compute-0 podman[106889]: 2025-12-11 09:21:14.086029301 +0000 UTC m=+0.130139991 container start 18ad14b9cd928a45ce4e864db1fb0529242cd12e88a89b171140393a4d5cd3a8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-2-0-compute-0-iryjby, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 11 09:21:14 compute-0 bash[106889]: 18ad14b9cd928a45ce4e864db1fb0529242cd12e88a89b171140393a4d5cd3a8
Dec 11 09:21:14 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-2-0-compute-0-iryjby[106915]: 11/12/2025 09:21:14 : epoch 693a8d0a : compute-0 : ganesha.nfsd-2[main] init_logging :LOG :NULL :LOG: Setting log level for all components to NIV_EVENT
Dec 11 09:21:14 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-2-0-compute-0-iryjby[106915]: 11/12/2025 09:21:14 : epoch 693a8d0a : compute-0 : ganesha.nfsd-2[main] main :MAIN :EVENT :ganesha.nfsd Starting: Ganesha Version 5.9
Dec 11 09:21:14 compute-0 systemd[1]: Started Ceph nfs.cephfs.2.0.compute-0.iryjby for 31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060.
Dec 11 09:21:14 compute-0 podman[106906]: 2025-12-11 09:21:14.102852735 +0000 UTC m=+0.060667249 container create 7739e898a9a3824c7aad08cea7e1565afa3be5bda402db729178657ff79bb36d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sharp_ride, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, CEPH_REF=squid, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 11 09:21:14 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-2-0-compute-0-iryjby[106915]: 11/12/2025 09:21:14 : epoch 693a8d0a : compute-0 : ganesha.nfsd-2[main] nfs_set_param_from_conf :NFS STARTUP :EVENT :Configuration file successfully parsed
Dec 11 09:21:14 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-2-0-compute-0-iryjby[106915]: 11/12/2025 09:21:14 : epoch 693a8d0a : compute-0 : ganesha.nfsd-2[main] monitoring_init :NFS STARTUP :EVENT :Init monitoring at 0.0.0.0:9587
Dec 11 09:21:14 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-2-0-compute-0-iryjby[106915]: 11/12/2025 09:21:14 : epoch 693a8d0a : compute-0 : ganesha.nfsd-2[main] fsal_init_fds_limit :MDCACHE LRU :EVENT :Setting the system-imposed limit on FDs to 1048576.
Dec 11 09:21:14 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-2-0-compute-0-iryjby[106915]: 11/12/2025 09:21:14 : epoch 693a8d0a : compute-0 : ganesha.nfsd-2[main] init_server_pkgs :NFS STARTUP :EVENT :Initializing ID Mapper.
Dec 11 09:21:14 compute-0 systemd[1]: Started libpod-conmon-7739e898a9a3824c7aad08cea7e1565afa3be5bda402db729178657ff79bb36d.scope.
Dec 11 09:21:14 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-2-0-compute-0-iryjby[106915]: 11/12/2025 09:21:14 : epoch 693a8d0a : compute-0 : ganesha.nfsd-2[main] init_server_pkgs :NFS STARTUP :EVENT :ID Mapper successfully initialized.
Dec 11 09:21:14 compute-0 podman[106906]: 2025-12-11 09:21:14.068816556 +0000 UTC m=+0.026631100 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec 11 09:21:14 compute-0 systemd[1]: Started libcrun container.
Dec 11 09:21:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/02f121497d0e45ffdc5969b068729f767a96c39a869bf9019858f6b8673a6422/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 11 09:21:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/02f121497d0e45ffdc5969b068729f767a96c39a869bf9019858f6b8673a6422/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 11 09:21:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/02f121497d0e45ffdc5969b068729f767a96c39a869bf9019858f6b8673a6422/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 11 09:21:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/02f121497d0e45ffdc5969b068729f767a96c39a869bf9019858f6b8673a6422/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 11 09:21:14 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-2-0-compute-0-iryjby[106915]: 11/12/2025 09:21:14 : epoch 693a8d0a : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Dec 11 09:21:14 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-mgr-compute-0-wwpcae[74711]: ::ffff:192.168.122.100 - - [11/Dec/2025:09:21:14] "GET /metrics HTTP/1.1" 200 48236 "" "Prometheus/2.51.0"
Dec 11 09:21:14 compute-0 ceph-mgr[74715]: [prometheus INFO cherrypy.access.139680037156848] ::ffff:192.168.122.100 - - [11/Dec/2025:09:21:14] "GET /metrics HTTP/1.1" 200 48236 "" "Prometheus/2.51.0"
Dec 11 09:21:14 compute-0 podman[106906]: 2025-12-11 09:21:14.502599636 +0000 UTC m=+0.460414170 container init 7739e898a9a3824c7aad08cea7e1565afa3be5bda402db729178657ff79bb36d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sharp_ride, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Dec 11 09:21:14 compute-0 ceph-mon[74426]: pgmap v117: 353 pgs: 353 active+clean; 457 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 85 B/s wr, 0 op/s
Dec 11 09:21:14 compute-0 podman[106906]: 2025-12-11 09:21:14.511814873 +0000 UTC m=+0.469629387 container start 7739e898a9a3824c7aad08cea7e1565afa3be5bda402db729178657ff79bb36d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sharp_ride, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Dec 11 09:21:14 compute-0 podman[106906]: 2025-12-11 09:21:14.515574099 +0000 UTC m=+0.473388613 container attach 7739e898a9a3824c7aad08cea7e1565afa3be5bda402db729178657ff79bb36d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sharp_ride, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.schema-version=1.0)
Dec 11 09:21:15 compute-0 ceph-mgr[74715]: log_channel(cluster) log [DBG] : pgmap v118: 353 pgs: 353 active+clean; 457 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 852 B/s rd, 426 B/s wr, 1 op/s
Dec 11 09:21:15 compute-0 radosgw[93354]: ====== starting new request req=0x7f36cd2845d0 =====
Dec 11 09:21:15 compute-0 radosgw[93354]: ====== req done req=0x7f36cd2845d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 11 09:21:15 compute-0 radosgw[93354]: beast: 0x7f36cd2845d0: 192.168.122.100 - anonymous [11/Dec/2025:09:21:15.184 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 11 09:21:15 compute-0 lvm[107042]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Dec 11 09:21:15 compute-0 lvm[107042]: VG ceph_vg0 finished
Dec 11 09:21:15 compute-0 radosgw[93354]: ====== starting new request req=0x7f36cd2845d0 =====
Dec 11 09:21:15 compute-0 radosgw[93354]: ====== req done req=0x7f36cd2845d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 11 09:21:15 compute-0 radosgw[93354]: beast: 0x7f36cd2845d0: 192.168.122.102 - anonymous [11/Dec/2025:09:21:15.250 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 11 09:21:15 compute-0 sharp_ride[106951]: {}
Dec 11 09:21:15 compute-0 systemd[1]: libpod-7739e898a9a3824c7aad08cea7e1565afa3be5bda402db729178657ff79bb36d.scope: Deactivated successfully.
Dec 11 09:21:15 compute-0 systemd[1]: libpod-7739e898a9a3824c7aad08cea7e1565afa3be5bda402db729178657ff79bb36d.scope: Consumed 1.327s CPU time.
Dec 11 09:21:15 compute-0 podman[106906]: 2025-12-11 09:21:15.352754781 +0000 UTC m=+1.310569325 container died 7739e898a9a3824c7aad08cea7e1565afa3be5bda402db729178657ff79bb36d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sharp_ride, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec 11 09:21:15 compute-0 systemd[1]: var-lib-containers-storage-overlay-02f121497d0e45ffdc5969b068729f767a96c39a869bf9019858f6b8673a6422-merged.mount: Deactivated successfully.
Dec 11 09:21:15 compute-0 podman[106906]: 2025-12-11 09:21:15.587636332 +0000 UTC m=+1.545450846 container remove 7739e898a9a3824c7aad08cea7e1565afa3be5bda402db729178657ff79bb36d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sharp_ride, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=squid)
Dec 11 09:21:15 compute-0 ceph-mon[74426]: pgmap v118: 353 pgs: 353 active+clean; 457 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 852 B/s rd, 426 B/s wr, 1 op/s
Dec 11 09:21:15 compute-0 systemd[1]: libpod-conmon-7739e898a9a3824c7aad08cea7e1565afa3be5bda402db729178657ff79bb36d.scope: Deactivated successfully.
Dec 11 09:21:15 compute-0 sudo[106746]: pam_unix(sudo:session): session closed for user root
Dec 11 09:21:15 compute-0 ceph-mon[74426]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec 11 09:21:15 compute-0 ceph-mon[74426]: log_channel(audit) log [INF] : from='mgr.14577 192.168.122.100:0/2102670525' entity='mgr.compute-0.wwpcae' 
Dec 11 09:21:15 compute-0 ceph-mon[74426]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec 11 09:21:15 compute-0 ceph-mon[74426]: log_channel(audit) log [INF] : from='mgr.14577 192.168.122.100:0/2102670525' entity='mgr.compute-0.wwpcae' 
Dec 11 09:21:15 compute-0 sudo[107056]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 11 09:21:15 compute-0 sudo[107056]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 11 09:21:15 compute-0 sudo[107056]: pam_unix(sudo:session): session closed for user root
Dec 11 09:21:16 compute-0 ceph-mon[74426]: mon.compute-0@0(leader).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 11 09:21:16 compute-0 ceph-mon[74426]: from='mgr.14577 192.168.122.100:0/2102670525' entity='mgr.compute-0.wwpcae' 
Dec 11 09:21:16 compute-0 ceph-mon[74426]: from='mgr.14577 192.168.122.100:0/2102670525' entity='mgr.compute-0.wwpcae' 
Dec 11 09:21:17 compute-0 ceph-mgr[74715]: log_channel(cluster) log [DBG] : pgmap v119: 353 pgs: 353 active+clean; 457 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 852 B/s rd, 426 B/s wr, 1 op/s
Dec 11 09:21:17 compute-0 radosgw[93354]: ====== starting new request req=0x7f36cd2845d0 =====
Dec 11 09:21:17 compute-0 radosgw[93354]: ====== req done req=0x7f36cd2845d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 11 09:21:17 compute-0 radosgw[93354]: beast: 0x7f36cd2845d0: 192.168.122.100 - anonymous [11/Dec/2025:09:21:17.186 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 11 09:21:17 compute-0 radosgw[93354]: ====== starting new request req=0x7f36cd2845d0 =====
Dec 11 09:21:17 compute-0 radosgw[93354]: ====== req done req=0x7f36cd2845d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 11 09:21:17 compute-0 radosgw[93354]: beast: 0x7f36cd2845d0: 192.168.122.102 - anonymous [11/Dec/2025:09:21:17.254 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 11 09:21:17 compute-0 sudo[107083]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 11 09:21:17 compute-0 sudo[107083]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 11 09:21:17 compute-0 sudo[107083]: pam_unix(sudo:session): session closed for user root
Dec 11 09:21:17 compute-0 sudo[107131]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-upopztmxvsqueycsacajdikrmyhjvtkv ; /usr/bin/python3'
Dec 11 09:21:17 compute-0 sudo[107131]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 11 09:21:17 compute-0 ceph-mon[74426]: pgmap v119: 353 pgs: 353 active+clean; 457 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 852 B/s rd, 426 B/s wr, 1 op/s
Dec 11 09:21:17 compute-0 python3[107133]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint radosgw-admin quay.io/ceph/ceph:v19 --fsid 31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch ps --daemon-type rgw --format json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 11 09:21:18 compute-0 podman[107134]: 2025-12-11 09:21:18.062476138 +0000 UTC m=+0.047001953 container create 9dd7713d6445590a7cdea118bdd507531d0980b34574cdddcf126651b7656da5 (image=quay.io/ceph/ceph:v19, name=admiring_swanson, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Dec 11 09:21:18 compute-0 systemd[1]: Started libpod-conmon-9dd7713d6445590a7cdea118bdd507531d0980b34574cdddcf126651b7656da5.scope.
Dec 11 09:21:18 compute-0 podman[107134]: 2025-12-11 09:21:18.042108355 +0000 UTC m=+0.026634200 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Dec 11 09:21:18 compute-0 systemd[1]: Started libcrun container.
Dec 11 09:21:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/811a64aa5004ab462350a2a6ccc411ea221ae039723f5f27a2053798f20f617e/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec 11 09:21:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/811a64aa5004ab462350a2a6ccc411ea221ae039723f5f27a2053798f20f617e/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 11 09:21:18 compute-0 podman[107134]: 2025-12-11 09:21:18.157646901 +0000 UTC m=+0.142172746 container init 9dd7713d6445590a7cdea118bdd507531d0980b34574cdddcf126651b7656da5 (image=quay.io/ceph/ceph:v19, name=admiring_swanson, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Dec 11 09:21:18 compute-0 podman[107134]: 2025-12-11 09:21:18.168333543 +0000 UTC m=+0.152859388 container start 9dd7713d6445590a7cdea118bdd507531d0980b34574cdddcf126651b7656da5 (image=quay.io/ceph/ceph:v19, name=admiring_swanson, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, CEPH_REF=squid, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1)
Dec 11 09:21:18 compute-0 podman[107134]: 2025-12-11 09:21:18.172704189 +0000 UTC m=+0.157230024 container attach 9dd7713d6445590a7cdea118bdd507531d0980b34574cdddcf126651b7656da5 (image=quay.io/ceph/ceph:v19, name=admiring_swanson, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, ceph=True)
Dec 11 09:21:18 compute-0 admiring_swanson[107149]: ERROR: invalid flag --daemon-type
Dec 11 09:21:18 compute-0 systemd[1]: libpod-9dd7713d6445590a7cdea118bdd507531d0980b34574cdddcf126651b7656da5.scope: Deactivated successfully.
Dec 11 09:21:18 compute-0 podman[107134]: 2025-12-11 09:21:18.224024576 +0000 UTC m=+0.208550391 container died 9dd7713d6445590a7cdea118bdd507531d0980b34574cdddcf126651b7656da5 (image=quay.io/ceph/ceph:v19, name=admiring_swanson, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True)
Dec 11 09:21:18 compute-0 systemd[1]: var-lib-containers-storage-overlay-811a64aa5004ab462350a2a6ccc411ea221ae039723f5f27a2053798f20f617e-merged.mount: Deactivated successfully.
Dec 11 09:21:18 compute-0 podman[107134]: 2025-12-11 09:21:18.271908296 +0000 UTC m=+0.256434111 container remove 9dd7713d6445590a7cdea118bdd507531d0980b34574cdddcf126651b7656da5 (image=quay.io/ceph/ceph:v19, name=admiring_swanson, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=squid)
Dec 11 09:21:18 compute-0 systemd[1]: libpod-conmon-9dd7713d6445590a7cdea118bdd507531d0980b34574cdddcf126651b7656da5.scope: Deactivated successfully.
Dec 11 09:21:18 compute-0 sudo[107131]: pam_unix(sudo:session): session closed for user root
Dec 11 09:21:19 compute-0 ceph-mgr[74715]: log_channel(cluster) log [DBG] : pgmap v120: 353 pgs: 353 active+clean; 457 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.0 KiB/s rd, 1023 B/s wr, 3 op/s
Dec 11 09:21:19 compute-0 radosgw[93354]: ====== starting new request req=0x7f36cd2845d0 =====
Dec 11 09:21:19 compute-0 radosgw[93354]: ====== req done req=0x7f36cd2845d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 11 09:21:19 compute-0 radosgw[93354]: beast: 0x7f36cd2845d0: 192.168.122.100 - anonymous [11/Dec/2025:09:21:19.189 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 11 09:21:19 compute-0 radosgw[93354]: ====== starting new request req=0x7f36cd2845d0 =====
Dec 11 09:21:19 compute-0 radosgw[93354]: ====== req done req=0x7f36cd2845d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 11 09:21:19 compute-0 radosgw[93354]: beast: 0x7f36cd2845d0: 192.168.122.102 - anonymous [11/Dec/2025:09:21:19.258 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 11 09:21:20 compute-0 ceph-mon[74426]: pgmap v120: 353 pgs: 353 active+clean; 457 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.0 KiB/s rd, 1023 B/s wr, 3 op/s
Dec 11 09:21:20 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-2-0-compute-0-iryjby[106915]: 11/12/2025 09:21:20 : epoch 693a8d0a : compute-0 : ganesha.nfsd-2[main] rados_kv_traverse :CLIENT ID :EVENT :Failed to lst kv ret=-2
Dec 11 09:21:20 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-2-0-compute-0-iryjby[106915]: 11/12/2025 09:21:20 : epoch 693a8d0a : compute-0 : ganesha.nfsd-2[main] rados_cluster_read_clids :CLIENT ID :EVENT :Failed to traverse recovery db: -2
Dec 11 09:21:20 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-2-0-compute-0-iryjby[106915]: 11/12/2025 09:21:20 : epoch 693a8d0a : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Dec 11 09:21:20 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-2-0-compute-0-iryjby[106915]: 11/12/2025 09:21:20 : epoch 693a8d0a : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Dec 11 09:21:20 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-2-0-compute-0-iryjby[106915]: 11/12/2025 09:21:20 : epoch 693a8d0a : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Dec 11 09:21:20 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-2-0-compute-0-iryjby[106915]: 11/12/2025 09:21:20 : epoch 693a8d0a : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Dec 11 09:21:20 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-2-0-compute-0-iryjby[106915]: 11/12/2025 09:21:20 : epoch 693a8d0a : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Dec 11 09:21:20 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-2-0-compute-0-iryjby[106915]: 11/12/2025 09:21:20 : epoch 693a8d0a : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Dec 11 09:21:20 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-2-0-compute-0-iryjby[106915]: 11/12/2025 09:21:20 : epoch 693a8d0a : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Dec 11 09:21:21 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-2-0-compute-0-iryjby[106915]: 11/12/2025 09:21:21 : epoch 693a8d0a : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Dec 11 09:21:21 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-2-0-compute-0-iryjby[106915]: 11/12/2025 09:21:21 : epoch 693a8d0a : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Dec 11 09:21:21 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-2-0-compute-0-iryjby[106915]: 11/12/2025 09:21:21 : epoch 693a8d0a : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Dec 11 09:21:21 compute-0 ceph-mgr[74715]: log_channel(cluster) log [DBG] : pgmap v121: 353 pgs: 353 active+clean; 457 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.6 KiB/s rd, 938 B/s wr, 2 op/s
Dec 11 09:21:21 compute-0 radosgw[93354]: ====== starting new request req=0x7f36cd2845d0 =====
Dec 11 09:21:21 compute-0 radosgw[93354]: ====== req done req=0x7f36cd2845d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 11 09:21:21 compute-0 radosgw[93354]: beast: 0x7f36cd2845d0: 192.168.122.100 - anonymous [11/Dec/2025:09:21:21.193 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 11 09:21:21 compute-0 radosgw[93354]: ====== starting new request req=0x7f36cd2845d0 =====
Dec 11 09:21:21 compute-0 radosgw[93354]: ====== req done req=0x7f36cd2845d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 11 09:21:21 compute-0 radosgw[93354]: beast: 0x7f36cd2845d0: 192.168.122.102 - anonymous [11/Dec/2025:09:21:21.263 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 11 09:21:21 compute-0 ceph-mon[74426]: mon.compute-0@0(leader).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 11 09:21:22 compute-0 ceph-mon[74426]: pgmap v121: 353 pgs: 353 active+clean; 457 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.6 KiB/s rd, 938 B/s wr, 2 op/s
Dec 11 09:21:22 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-haproxy-nfs-cephfs-compute-0-qtoxfz[95274]: [WARNING] 344/092122 (4) : Server backend/nfs.cephfs.1 is UP, reason: Layer4 check passed, check duration: 0ms. 2 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Dec 11 09:21:23 compute-0 ceph-mgr[74715]: [balancer INFO root] Optimize plan auto_2025-12-11_09:21:23
Dec 11 09:21:23 compute-0 ceph-mgr[74715]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec 11 09:21:23 compute-0 ceph-mgr[74715]: [balancer INFO root] do_upmap
Dec 11 09:21:23 compute-0 ceph-mgr[74715]: [balancer INFO root] pools ['volumes', 'cephfs.cephfs.meta', 'default.rgw.control', '.rgw.root', 'default.rgw.log', 'vms', '.nfs', '.mgr', 'cephfs.cephfs.data', 'default.rgw.meta', 'images', 'backups']
Dec 11 09:21:23 compute-0 ceph-mgr[74715]: [balancer INFO root] prepared 0/10 upmap changes
Dec 11 09:21:23 compute-0 ceph-mgr[74715]: log_channel(cluster) log [DBG] : pgmap v122: 353 pgs: 353 active+clean; 457 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.6 KiB/s rd, 938 B/s wr, 2 op/s
Dec 11 09:21:23 compute-0 radosgw[93354]: ====== starting new request req=0x7f36cd2845d0 =====
Dec 11 09:21:23 compute-0 radosgw[93354]: ====== req done req=0x7f36cd2845d0 op status=0 http_status=200 latency=0.001000031s ======
Dec 11 09:21:23 compute-0 radosgw[93354]: beast: 0x7f36cd2845d0: 192.168.122.100 - anonymous [11/Dec/2025:09:21:23.194 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Dec 11 09:21:23 compute-0 radosgw[93354]: ====== starting new request req=0x7f36cd2845d0 =====
Dec 11 09:21:23 compute-0 radosgw[93354]: ====== req done req=0x7f36cd2845d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 11 09:21:23 compute-0 radosgw[93354]: beast: 0x7f36cd2845d0: 192.168.122.102 - anonymous [11/Dec/2025:09:21:23.267 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 11 09:21:23 compute-0 ceph-mon[74426]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Dec 11 09:21:23 compute-0 ceph-mon[74426]: log_channel(audit) log [DBG] : from='mgr.14577 192.168.122.100:0/2102670525' entity='mgr.compute-0.wwpcae' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec 11 09:21:23 compute-0 ceph-mgr[74715]: [pg_autoscaler INFO root] _maybe_adjust
Dec 11 09:21:23 compute-0 ceph-mgr[74715]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 11 09:21:23 compute-0 ceph-mgr[74715]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec 11 09:21:23 compute-0 ceph-mgr[74715]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 11 09:21:23 compute-0 ceph-mgr[74715]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 11 09:21:23 compute-0 ceph-mgr[74715]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 11 09:21:23 compute-0 ceph-mgr[74715]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 11 09:21:23 compute-0 ceph-mgr[74715]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 11 09:21:23 compute-0 ceph-mgr[74715]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 11 09:21:23 compute-0 ceph-mgr[74715]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 11 09:21:23 compute-0 ceph-mgr[74715]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 11 09:21:23 compute-0 ceph-mgr[74715]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 11 09:21:23 compute-0 ceph-mgr[74715]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Dec 11 09:21:23 compute-0 ceph-mgr[74715]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 11 09:21:23 compute-0 ceph-mgr[74715]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 11 09:21:23 compute-0 ceph-mgr[74715]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 11 09:21:23 compute-0 ceph-mgr[74715]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Dec 11 09:21:23 compute-0 ceph-mgr[74715]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 11 09:21:23 compute-0 ceph-mgr[74715]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Dec 11 09:21:23 compute-0 ceph-mgr[74715]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 11 09:21:23 compute-0 ceph-mgr[74715]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Dec 11 09:21:23 compute-0 ceph-mgr[74715]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 11 09:21:23 compute-0 ceph-mgr[74715]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 11 09:21:23 compute-0 ceph-mgr[74715]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 11 09:21:23 compute-0 ceph-mgr[74715]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 0.0 of space, bias 4.0, pg target 0.0 quantized to 32 (current 32)
Dec 11 09:21:23 compute-0 ceph-mon[74426]: from='mgr.14577 192.168.122.100:0/2102670525' entity='mgr.compute-0.wwpcae' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec 11 09:21:23 compute-0 ceph-mgr[74715]: [volumes INFO mgr_util] scanning for idle connections..
Dec 11 09:21:23 compute-0 ceph-mgr[74715]: [volumes INFO mgr_util] cleaning up connections: []
Dec 11 09:21:23 compute-0 ceph-mgr[74715]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec 11 09:21:23 compute-0 ceph-mgr[74715]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 11 09:21:23 compute-0 ceph-mgr[74715]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 11 09:21:23 compute-0 ceph-mgr[74715]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 11 09:21:23 compute-0 ceph-mgr[74715]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 11 09:21:23 compute-0 ceph-mgr[74715]: [volumes INFO mgr_util] scanning for idle connections..
Dec 11 09:21:23 compute-0 ceph-mgr[74715]: [volumes INFO mgr_util] cleaning up connections: []
Dec 11 09:21:23 compute-0 ceph-mgr[74715]: [volumes INFO mgr_util] scanning for idle connections..
Dec 11 09:21:23 compute-0 ceph-mgr[74715]: [volumes INFO mgr_util] cleaning up connections: []
Dec 11 09:21:23 compute-0 ceph-mgr[74715]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec 11 09:21:23 compute-0 ceph-mgr[74715]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 11 09:21:23 compute-0 ceph-mgr[74715]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 11 09:21:23 compute-0 ceph-mgr[74715]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 11 09:21:23 compute-0 ceph-mgr[74715]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 11 09:21:24 compute-0 ceph-mon[74426]: pgmap v122: 353 pgs: 353 active+clean; 457 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.6 KiB/s rd, 938 B/s wr, 2 op/s
Dec 11 09:21:24 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-mgr-compute-0-wwpcae[74711]: ::ffff:192.168.122.100 - - [11/Dec/2025:09:21:24] "GET /metrics HTTP/1.1" 200 48243 "" "Prometheus/2.51.0"
Dec 11 09:21:24 compute-0 ceph-mgr[74715]: [prometheus INFO cherrypy.access.139680037156848] ::ffff:192.168.122.100 - - [11/Dec/2025:09:21:24] "GET /metrics HTTP/1.1" 200 48243 "" "Prometheus/2.51.0"
Dec 11 09:21:25 compute-0 ceph-mgr[74715]: log_channel(cluster) log [DBG] : pgmap v123: 353 pgs: 353 active+clean; 457 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 3.6 KiB/s rd, 1.7 KiB/s wr, 5 op/s
Dec 11 09:21:25 compute-0 radosgw[93354]: ====== starting new request req=0x7f36cd2845d0 =====
Dec 11 09:21:25 compute-0 radosgw[93354]: ====== req done req=0x7f36cd2845d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 11 09:21:25 compute-0 radosgw[93354]: beast: 0x7f36cd2845d0: 192.168.122.100 - anonymous [11/Dec/2025:09:21:25.197 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 11 09:21:25 compute-0 radosgw[93354]: ====== starting new request req=0x7f36cd2845d0 =====
Dec 11 09:21:25 compute-0 radosgw[93354]: ====== req done req=0x7f36cd2845d0 op status=0 http_status=200 latency=0.001000031s ======
Dec 11 09:21:25 compute-0 radosgw[93354]: beast: 0x7f36cd2845d0: 192.168.122.102 - anonymous [11/Dec/2025:09:21:25.271 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Dec 11 09:21:26 compute-0 ceph-mon[74426]: pgmap v123: 353 pgs: 353 active+clean; 457 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 3.6 KiB/s rd, 1.7 KiB/s wr, 5 op/s
Dec 11 09:21:26 compute-0 ceph-mon[74426]: mon.compute-0@0(leader).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 11 09:21:27 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-2-0-compute-0-iryjby[106915]: 11/12/2025 09:21:27 : epoch 693a8d0a : compute-0 : ganesha.nfsd-2[main] rados_cluster_end_grace :CLIENT ID :EVENT :Failed to remove rec-0000000000000009:nfs.cephfs.2: -2
Dec 11 09:21:27 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-2-0-compute-0-iryjby[106915]: 11/12/2025 09:21:27 : epoch 693a8d0a : compute-0 : ganesha.nfsd-2[main] nfs_lift_grace_locked :STATE :EVENT :NFS Server Now NOT IN GRACE
Dec 11 09:21:27 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-2-0-compute-0-iryjby[106915]: 11/12/2025 09:21:27 : epoch 693a8d0a : compute-0 : ganesha.nfsd-2[main] main :NFS STARTUP :WARN :No export entries found in configuration file !!!
Dec 11 09:21:27 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-2-0-compute-0-iryjby[106915]: 11/12/2025 09:21:27 : epoch 693a8d0a : compute-0 : ganesha.nfsd-2[main] config_errs_to_log :CONFIG :WARN :Config File (/etc/ganesha/ganesha.conf:24): Unknown block (RADOS_URLS)
Dec 11 09:21:27 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-2-0-compute-0-iryjby[106915]: 11/12/2025 09:21:27 : epoch 693a8d0a : compute-0 : ganesha.nfsd-2[main] config_errs_to_log :CONFIG :WARN :Config File (/etc/ganesha/ganesha.conf:29): Unknown block (RGW)
Dec 11 09:21:27 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-2-0-compute-0-iryjby[106915]: 11/12/2025 09:21:27 : epoch 693a8d0a : compute-0 : ganesha.nfsd-2[main] lower_my_caps :NFS STARTUP :EVENT :CAP_SYS_RESOURCE was successfully removed for proper quota management in FSAL
Dec 11 09:21:27 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-2-0-compute-0-iryjby[106915]: 11/12/2025 09:21:27 : epoch 693a8d0a : compute-0 : ganesha.nfsd-2[main] lower_my_caps :NFS STARTUP :EVENT :currently set capabilities are: cap_chown,cap_dac_override,cap_fowner,cap_fsetid,cap_kill,cap_setgid,cap_setuid,cap_setpcap,cap_net_bind_service,cap_sys_chroot,cap_setfcap=ep
Dec 11 09:21:27 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-2-0-compute-0-iryjby[106915]: 11/12/2025 09:21:27 : epoch 693a8d0a : compute-0 : ganesha.nfsd-2[main] gsh_dbus_pkginit :DBUS :CRIT :dbus_bus_get failed (Failed to connect to socket /run/dbus/system_bus_socket: No such file or directory)
Dec 11 09:21:27 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-2-0-compute-0-iryjby[106915]: 11/12/2025 09:21:27 : epoch 693a8d0a : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Dec 11 09:21:27 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-2-0-compute-0-iryjby[106915]: 11/12/2025 09:21:27 : epoch 693a8d0a : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Dec 11 09:21:27 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-2-0-compute-0-iryjby[106915]: 11/12/2025 09:21:27 : epoch 693a8d0a : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Dec 11 09:21:27 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-2-0-compute-0-iryjby[106915]: 11/12/2025 09:21:27 : epoch 693a8d0a : compute-0 : ganesha.nfsd-2[main] nfs_Init_svc :DISP :CRIT :Cannot acquire credentials for principal nfs
Dec 11 09:21:27 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-2-0-compute-0-iryjby[106915]: 11/12/2025 09:21:27 : epoch 693a8d0a : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Dec 11 09:21:27 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-2-0-compute-0-iryjby[106915]: 11/12/2025 09:21:27 : epoch 693a8d0a : compute-0 : ganesha.nfsd-2[main] nfs_Init_admin_thread :NFS CB :EVENT :Admin thread initialized
Dec 11 09:21:27 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-2-0-compute-0-iryjby[106915]: 11/12/2025 09:21:27 : epoch 693a8d0a : compute-0 : ganesha.nfsd-2[main] nfs_rpc_cb_init_ccache :NFS STARTUP :EVENT :Callback creds directory (/var/run/ganesha) already exists
Dec 11 09:21:27 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-2-0-compute-0-iryjby[106915]: 11/12/2025 09:21:27 : epoch 693a8d0a : compute-0 : ganesha.nfsd-2[main] find_keytab_entry :NFS CB :WARN :Configuration file does not specify default realm while getting default realm name
Dec 11 09:21:27 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-2-0-compute-0-iryjby[106915]: 11/12/2025 09:21:27 : epoch 693a8d0a : compute-0 : ganesha.nfsd-2[main] gssd_refresh_krb5_machine_credential :NFS CB :CRIT :ERROR: gssd_refresh_krb5_machine_credential: no usable keytab entry found in keytab /etc/krb5.keytab for connection with host localhost
Dec 11 09:21:27 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-2-0-compute-0-iryjby[106915]: 11/12/2025 09:21:27 : epoch 693a8d0a : compute-0 : ganesha.nfsd-2[main] nfs_rpc_cb_init_ccache :NFS STARTUP :WARN :gssd_refresh_krb5_machine_credential failed (-1765328160:0)
Dec 11 09:21:27 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-2-0-compute-0-iryjby[106915]: 11/12/2025 09:21:27 : epoch 693a8d0a : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :Starting delayed executor.
Dec 11 09:21:27 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-2-0-compute-0-iryjby[106915]: 11/12/2025 09:21:27 : epoch 693a8d0a : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :gsh_dbusthread was started successfully
Dec 11 09:21:27 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-2-0-compute-0-iryjby[106915]: 11/12/2025 09:21:27 : epoch 693a8d0a : compute-0 : ganesha.nfsd-2[dbus] gsh_dbus_thread :DBUS :CRIT :DBUS not initialized, service thread exiting
Dec 11 09:21:27 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-2-0-compute-0-iryjby[106915]: 11/12/2025 09:21:27 : epoch 693a8d0a : compute-0 : ganesha.nfsd-2[dbus] gsh_dbus_thread :DBUS :EVENT :shutdown
Dec 11 09:21:27 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-2-0-compute-0-iryjby[106915]: 11/12/2025 09:21:27 : epoch 693a8d0a : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :admin thread was started successfully
Dec 11 09:21:27 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-2-0-compute-0-iryjby[106915]: 11/12/2025 09:21:27 : epoch 693a8d0a : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :reaper thread was started successfully
Dec 11 09:21:27 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-2-0-compute-0-iryjby[106915]: 11/12/2025 09:21:27 : epoch 693a8d0a : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :General fridge was started successfully
Dec 11 09:21:27 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-2-0-compute-0-iryjby[106915]: 11/12/2025 09:21:27 : epoch 693a8d0a : compute-0 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :-------------------------------------------------
Dec 11 09:21:27 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-2-0-compute-0-iryjby[106915]: 11/12/2025 09:21:27 : epoch 693a8d0a : compute-0 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :             NFS SERVER INITIALIZED
Dec 11 09:21:27 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-2-0-compute-0-iryjby[106915]: 11/12/2025 09:21:27 : epoch 693a8d0a : compute-0 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :-------------------------------------------------
Dec 11 09:21:27 compute-0 ceph-mgr[74715]: log_channel(cluster) log [DBG] : pgmap v124: 353 pgs: 353 active+clean; 457 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 3.2 KiB/s rd, 1.3 KiB/s wr, 4 op/s
Dec 11 09:21:27 compute-0 radosgw[93354]: ====== starting new request req=0x7f36cd2845d0 =====
Dec 11 09:21:27 compute-0 radosgw[93354]: ====== req done req=0x7f36cd2845d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 11 09:21:27 compute-0 radosgw[93354]: beast: 0x7f36cd2845d0: 192.168.122.100 - anonymous [11/Dec/2025:09:21:27.199 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 11 09:21:27 compute-0 radosgw[93354]: ====== starting new request req=0x7f36cd2845d0 =====
Dec 11 09:21:27 compute-0 radosgw[93354]: ====== req done req=0x7f36cd2845d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 11 09:21:27 compute-0 radosgw[93354]: beast: 0x7f36cd2845d0: 192.168.122.102 - anonymous [11/Dec/2025:09:21:27.276 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 11 09:21:27 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-2-0-compute-0-iryjby[106915]: 11/12/2025 09:21:27 : epoch 693a8d0a : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc034000df0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 11 09:21:27 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-2-0-compute-0-iryjby[106915]: 11/12/2025 09:21:27 : epoch 693a8d0a : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc028001c40 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 11 09:21:28 compute-0 sudo[107228]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qmagyrhtzfbcwdpjkyfegxyvzzznahiy ; /usr/bin/python3'
Dec 11 09:21:28 compute-0 sudo[107228]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 11 09:21:28 compute-0 ceph-mon[74426]: pgmap v124: 353 pgs: 353 active+clean; 457 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 3.2 KiB/s rd, 1.3 KiB/s wr, 4 op/s
Dec 11 09:21:28 compute-0 python3[107230]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint radosgw-admin quay.io/ceph/ceph:v19 --fsid 31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch ps --daemon-type rgw --format json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 11 09:21:28 compute-0 podman[107231]: 2025-12-11 09:21:28.621803574 +0000 UTC m=+0.053648341 container create 36466ddfcba00b40d624126e241c7cf4c0f7b5b30115e8501504ab431680d444 (image=quay.io/ceph/ceph:v19, name=busy_robinson, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 11 09:21:28 compute-0 systemd[1]: Started libpod-conmon-36466ddfcba00b40d624126e241c7cf4c0f7b5b30115e8501504ab431680d444.scope.
Dec 11 09:21:28 compute-0 systemd[1]: Started libcrun container.
Dec 11 09:21:28 compute-0 podman[107231]: 2025-12-11 09:21:28.596909099 +0000 UTC m=+0.028753887 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Dec 11 09:21:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a0704cbb0983710724e2868bf68bdb02388562b6ada4cc0d3d7d144472dc939f/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec 11 09:21:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a0704cbb0983710724e2868bf68bdb02388562b6ada4cc0d3d7d144472dc939f/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 11 09:21:28 compute-0 podman[107231]: 2025-12-11 09:21:28.707357526 +0000 UTC m=+0.139202323 container init 36466ddfcba00b40d624126e241c7cf4c0f7b5b30115e8501504ab431680d444 (image=quay.io/ceph/ceph:v19, name=busy_robinson, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, OSD_FLAVOR=default, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 11 09:21:28 compute-0 podman[107231]: 2025-12-11 09:21:28.71582516 +0000 UTC m=+0.147669927 container start 36466ddfcba00b40d624126e241c7cf4c0f7b5b30115e8501504ab431680d444 (image=quay.io/ceph/ceph:v19, name=busy_robinson, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1)
Dec 11 09:21:28 compute-0 podman[107231]: 2025-12-11 09:21:28.719122452 +0000 UTC m=+0.150967219 container attach 36466ddfcba00b40d624126e241c7cf4c0f7b5b30115e8501504ab431680d444 (image=quay.io/ceph/ceph:v19, name=busy_robinson, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=squid, io.buildah.version=1.40.1)
Dec 11 09:21:28 compute-0 busy_robinson[107246]: ERROR: invalid flag --daemon-type
Dec 11 09:21:28 compute-0 systemd[1]: libpod-36466ddfcba00b40d624126e241c7cf4c0f7b5b30115e8501504ab431680d444.scope: Deactivated successfully.
Dec 11 09:21:28 compute-0 podman[107231]: 2025-12-11 09:21:28.775887979 +0000 UTC m=+0.207732766 container died 36466ddfcba00b40d624126e241c7cf4c0f7b5b30115e8501504ab431680d444 (image=quay.io/ceph/ceph:v19, name=busy_robinson, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 11 09:21:28 compute-0 systemd[1]: var-lib-containers-storage-overlay-a0704cbb0983710724e2868bf68bdb02388562b6ada4cc0d3d7d144472dc939f-merged.mount: Deactivated successfully.
Dec 11 09:21:28 compute-0 podman[107231]: 2025-12-11 09:21:28.821417125 +0000 UTC m=+0.253261892 container remove 36466ddfcba00b40d624126e241c7cf4c0f7b5b30115e8501504ab431680d444 (image=quay.io/ceph/ceph:v19, name=busy_robinson, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, ceph=True)
Dec 11 09:21:28 compute-0 systemd[1]: libpod-conmon-36466ddfcba00b40d624126e241c7cf4c0f7b5b30115e8501504ab431680d444.scope: Deactivated successfully.
Dec 11 09:21:28 compute-0 sudo[107228]: pam_unix(sudo:session): session closed for user root
Dec 11 09:21:28 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-2-0-compute-0-iryjby[106915]: 11/12/2025 09:21:28 : epoch 693a8d0a : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc008000b60 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 11 09:21:29 compute-0 sudo[107302]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mwcfrajsiapuhopeqioghgmwcmauthqe ; /usr/bin/python3'
Dec 11 09:21:29 compute-0 sudo[107302]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 11 09:21:29 compute-0 ceph-mgr[74715]: log_channel(cluster) log [DBG] : pgmap v125: 353 pgs: 353 active+clean; 457 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 3.2 KiB/s rd, 1.3 KiB/s wr, 4 op/s
Dec 11 09:21:29 compute-0 radosgw[93354]: ====== starting new request req=0x7f36cd2845d0 =====
Dec 11 09:21:29 compute-0 radosgw[93354]: ====== req done req=0x7f36cd2845d0 op status=0 http_status=200 latency=0.001000032s ======
Dec 11 09:21:29 compute-0 radosgw[93354]: beast: 0x7f36cd2845d0: 192.168.122.100 - anonymous [11/Dec/2025:09:21:29.202 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Dec 11 09:21:29 compute-0 podman[107305]: 2025-12-11 09:21:29.279683757 +0000 UTC m=+0.060650159 container create 318f82db0695cf51ed83200d79613bb15144daee00aadb917d8f6d6433c421ab (image=quay.io/ceph/ceph:v19, name=intelligent_mclean, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 11 09:21:29 compute-0 radosgw[93354]: ====== starting new request req=0x7f36cd2845d0 =====
Dec 11 09:21:29 compute-0 radosgw[93354]: ====== req done req=0x7f36cd2845d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 11 09:21:29 compute-0 radosgw[93354]: beast: 0x7f36cd2845d0: 192.168.122.102 - anonymous [11/Dec/2025:09:21:29.280 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 11 09:21:29 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-2-0-compute-0-iryjby[106915]: 11/12/2025 09:21:29 : epoch 693a8d0a : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc034000df0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 11 09:21:29 compute-0 podman[107305]: 2025-12-11 09:21:29.255979699 +0000 UTC m=+0.036946121 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Dec 11 09:21:29 compute-0 systemd[1]: Started libpod-conmon-318f82db0695cf51ed83200d79613bb15144daee00aadb917d8f6d6433c421ab.scope.
Dec 11 09:21:29 compute-0 systemd[1]: Started libcrun container.
Dec 11 09:21:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6de73d5c73e049cfd86d45b8099a0e851173343e975b4d408e5028793b24eea6/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec 11 09:21:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6de73d5c73e049cfd86d45b8099a0e851173343e975b4d408e5028793b24eea6/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 11 09:21:29 compute-0 podman[107305]: 2025-12-11 09:21:29.424896946 +0000 UTC m=+0.205863378 container init 318f82db0695cf51ed83200d79613bb15144daee00aadb917d8f6d6433c421ab (image=quay.io/ceph/ceph:v19, name=intelligent_mclean, org.label-schema.build-date=20250325, ceph=True, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 11 09:21:29 compute-0 podman[107305]: 2025-12-11 09:21:29.435523897 +0000 UTC m=+0.216490299 container start 318f82db0695cf51ed83200d79613bb15144daee00aadb917d8f6d6433c421ab (image=quay.io/ceph/ceph:v19, name=intelligent_mclean, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=squid, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True)
Dec 11 09:21:29 compute-0 podman[107305]: 2025-12-11 09:21:29.439932833 +0000 UTC m=+0.220899235 container attach 318f82db0695cf51ed83200d79613bb15144daee00aadb917d8f6d6433c421ab (image=quay.io/ceph/ceph:v19, name=intelligent_mclean, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Dec 11 09:21:29 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-haproxy-nfs-cephfs-compute-0-qtoxfz[95274]: [WARNING] 344/092129 (4) : Server backend/nfs.cephfs.2 is UP, reason: Layer4 check passed, check duration: 0ms. 3 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Dec 11 09:21:29 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-2-0-compute-0-iryjby[106915]: 11/12/2025 09:21:29 : epoch 693a8d0a : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc00c000b60 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 11 09:21:29 compute-0 ceph-mon[74426]: pgmap v125: 353 pgs: 353 active+clean; 457 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 3.2 KiB/s rd, 1.3 KiB/s wr, 4 op/s
Dec 11 09:21:30 compute-0 intelligent_mclean[107320]: could not fetch user info: no user info saved
Dec 11 09:21:30 compute-0 systemd[1]: libpod-318f82db0695cf51ed83200d79613bb15144daee00aadb917d8f6d6433c421ab.scope: Deactivated successfully.
Dec 11 09:21:30 compute-0 conmon[107320]: conmon 318f82db0695cf51ed83 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-318f82db0695cf51ed83200d79613bb15144daee00aadb917d8f6d6433c421ab.scope/container/memory.events
Dec 11 09:21:30 compute-0 podman[107305]: 2025-12-11 09:21:30.466810411 +0000 UTC m=+1.247776833 container died 318f82db0695cf51ed83200d79613bb15144daee00aadb917d8f6d6433c421ab (image=quay.io/ceph/ceph:v19, name=intelligent_mclean, org.label-schema.build-date=20250325, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 11 09:21:30 compute-0 systemd[1]: var-lib-containers-storage-overlay-6de73d5c73e049cfd86d45b8099a0e851173343e975b4d408e5028793b24eea6-merged.mount: Deactivated successfully.
Dec 11 09:21:30 compute-0 podman[107305]: 2025-12-11 09:21:30.519457689 +0000 UTC m=+1.300424091 container remove 318f82db0695cf51ed83200d79613bb15144daee00aadb917d8f6d6433c421ab (image=quay.io/ceph/ceph:v19, name=intelligent_mclean, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1)
Dec 11 09:21:30 compute-0 systemd[1]: libpod-conmon-318f82db0695cf51ed83200d79613bb15144daee00aadb917d8f6d6433c421ab.scope: Deactivated successfully.
Dec 11 09:21:30 compute-0 sudo[107302]: pam_unix(sudo:session): session closed for user root
Dec 11 09:21:30 compute-0 sudo[107441]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hslswsotnucjgbkrqommbrhtyejpgfxd ; /usr/bin/python3'
Dec 11 09:21:30 compute-0 sudo[107441]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 11 09:21:30 compute-0 python3[107443]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint radosgw-admin quay.io/ceph/ceph:v19 --fsid 31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   user create --uid="glance" --display-name="Glance S3 User" _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 11 09:21:30 compute-0 podman[107444]: 2025-12-11 09:21:30.941223284 +0000 UTC m=+0.054006081 container create 1701b94e492430a8725e0415c0cc3a69fe0b4445969ba12d7ddfd68da8738968 (image=quay.io/ceph/ceph:v19, name=inspiring_kapitsa, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid)
Dec 11 09:21:30 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-2-0-compute-0-iryjby[106915]: 11/12/2025 09:21:30 : epoch 693a8d0a : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc020001250 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 11 09:21:30 compute-0 systemd[1]: Started libpod-conmon-1701b94e492430a8725e0415c0cc3a69fe0b4445969ba12d7ddfd68da8738968.scope.
Dec 11 09:21:31 compute-0 systemd[1]: Started libcrun container.
Dec 11 09:21:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8dde60723b308ab4667b05b63587a6885ccc5e7605563675f560479a01896b47/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec 11 09:21:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8dde60723b308ab4667b05b63587a6885ccc5e7605563675f560479a01896b47/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 11 09:21:31 compute-0 podman[107444]: 2025-12-11 09:21:30.919159257 +0000 UTC m=+0.031942064 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Dec 11 09:21:31 compute-0 podman[107444]: 2025-12-11 09:21:31.024886697 +0000 UTC m=+0.137669504 container init 1701b94e492430a8725e0415c0cc3a69fe0b4445969ba12d7ddfd68da8738968 (image=quay.io/ceph/ceph:v19, name=inspiring_kapitsa, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0)
Dec 11 09:21:31 compute-0 podman[107444]: 2025-12-11 09:21:31.030564874 +0000 UTC m=+0.143347661 container start 1701b94e492430a8725e0415c0cc3a69fe0b4445969ba12d7ddfd68da8738968 (image=quay.io/ceph/ceph:v19, name=inspiring_kapitsa, org.label-schema.build-date=20250325, CEPH_REF=squid, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 11 09:21:31 compute-0 podman[107444]: 2025-12-11 09:21:31.035043314 +0000 UTC m=+0.147826101 container attach 1701b94e492430a8725e0415c0cc3a69fe0b4445969ba12d7ddfd68da8738968 (image=quay.io/ceph/ceph:v19, name=inspiring_kapitsa, io.buildah.version=1.40.1, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 11 09:21:31 compute-0 ceph-mgr[74715]: log_channel(cluster) log [DBG] : pgmap v126: 353 pgs: 353 active+clean; 457 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.1 KiB/s rd, 767 B/s wr, 3 op/s
Dec 11 09:21:31 compute-0 radosgw[93354]: ====== starting new request req=0x7f36cd2845d0 =====
Dec 11 09:21:31 compute-0 radosgw[93354]: ====== req done req=0x7f36cd2845d0 op status=0 http_status=200 latency=0.001000031s ======
Dec 11 09:21:31 compute-0 radosgw[93354]: beast: 0x7f36cd2845d0: 192.168.122.100 - anonymous [11/Dec/2025:09:21:31.204 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Dec 11 09:21:31 compute-0 inspiring_kapitsa[107460]: {
Dec 11 09:21:31 compute-0 inspiring_kapitsa[107460]:     "user_id": "glance",
Dec 11 09:21:31 compute-0 inspiring_kapitsa[107460]:     "display_name": "Glance S3 User",
Dec 11 09:21:31 compute-0 inspiring_kapitsa[107460]:     "email": "",
Dec 11 09:21:31 compute-0 inspiring_kapitsa[107460]:     "suspended": 0,
Dec 11 09:21:31 compute-0 inspiring_kapitsa[107460]:     "max_buckets": 1000,
Dec 11 09:21:31 compute-0 inspiring_kapitsa[107460]:     "subusers": [],
Dec 11 09:21:31 compute-0 inspiring_kapitsa[107460]:     "keys": [
Dec 11 09:21:31 compute-0 inspiring_kapitsa[107460]:         {
Dec 11 09:21:31 compute-0 inspiring_kapitsa[107460]:             "user": "glance",
Dec 11 09:21:31 compute-0 inspiring_kapitsa[107460]:             "access_key": "GNR1MR1BYF7825JFD351",
Dec 11 09:21:31 compute-0 inspiring_kapitsa[107460]:             "secret_key": "mFuATK0qg8AImsiOA1tIglqaIU1BY6LWpBECm8x6",
Dec 11 09:21:31 compute-0 inspiring_kapitsa[107460]:             "active": true,
Dec 11 09:21:31 compute-0 inspiring_kapitsa[107460]:             "create_date": "2025-12-11T09:21:31.201006Z"
Dec 11 09:21:31 compute-0 inspiring_kapitsa[107460]:         }
Dec 11 09:21:31 compute-0 inspiring_kapitsa[107460]:     ],
Dec 11 09:21:31 compute-0 inspiring_kapitsa[107460]:     "swift_keys": [],
Dec 11 09:21:31 compute-0 inspiring_kapitsa[107460]:     "caps": [],
Dec 11 09:21:31 compute-0 inspiring_kapitsa[107460]:     "op_mask": "read, write, delete",
Dec 11 09:21:31 compute-0 inspiring_kapitsa[107460]:     "default_placement": "",
Dec 11 09:21:31 compute-0 inspiring_kapitsa[107460]:     "default_storage_class": "",
Dec 11 09:21:31 compute-0 inspiring_kapitsa[107460]:     "placement_tags": [],
Dec 11 09:21:31 compute-0 inspiring_kapitsa[107460]:     "bucket_quota": {
Dec 11 09:21:31 compute-0 inspiring_kapitsa[107460]:         "enabled": false,
Dec 11 09:21:31 compute-0 inspiring_kapitsa[107460]:         "check_on_raw": false,
Dec 11 09:21:31 compute-0 inspiring_kapitsa[107460]:         "max_size": -1,
Dec 11 09:21:31 compute-0 inspiring_kapitsa[107460]:         "max_size_kb": 0,
Dec 11 09:21:31 compute-0 inspiring_kapitsa[107460]:         "max_objects": -1
Dec 11 09:21:31 compute-0 inspiring_kapitsa[107460]:     },
Dec 11 09:21:31 compute-0 inspiring_kapitsa[107460]:     "user_quota": {
Dec 11 09:21:31 compute-0 inspiring_kapitsa[107460]:         "enabled": false,
Dec 11 09:21:31 compute-0 inspiring_kapitsa[107460]:         "check_on_raw": false,
Dec 11 09:21:31 compute-0 inspiring_kapitsa[107460]:         "max_size": -1,
Dec 11 09:21:31 compute-0 inspiring_kapitsa[107460]:         "max_size_kb": 0,
Dec 11 09:21:31 compute-0 inspiring_kapitsa[107460]:         "max_objects": -1
Dec 11 09:21:31 compute-0 inspiring_kapitsa[107460]:     },
Dec 11 09:21:31 compute-0 inspiring_kapitsa[107460]:     "temp_url_keys": [],
Dec 11 09:21:31 compute-0 inspiring_kapitsa[107460]:     "type": "rgw",
Dec 11 09:21:31 compute-0 inspiring_kapitsa[107460]:     "mfa_ids": [],
Dec 11 09:21:31 compute-0 inspiring_kapitsa[107460]:     "account_id": "",
Dec 11 09:21:31 compute-0 inspiring_kapitsa[107460]:     "path": "/",
Dec 11 09:21:31 compute-0 inspiring_kapitsa[107460]:     "create_date": "2025-12-11T09:21:31.200037Z",
Dec 11 09:21:31 compute-0 inspiring_kapitsa[107460]:     "tags": [],
Dec 11 09:21:31 compute-0 inspiring_kapitsa[107460]:     "group_ids": []
Dec 11 09:21:31 compute-0 inspiring_kapitsa[107460]: }
Dec 11 09:21:31 compute-0 inspiring_kapitsa[107460]: 
Dec 11 09:21:31 compute-0 radosgw[93354]: ====== starting new request req=0x7f36cd2845d0 =====
Dec 11 09:21:31 compute-0 radosgw[93354]: ====== req done req=0x7f36cd2845d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 11 09:21:31 compute-0 radosgw[93354]: beast: 0x7f36cd2845d0: 192.168.122.102 - anonymous [11/Dec/2025:09:21:31.283 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 11 09:21:31 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-2-0-compute-0-iryjby[106915]: 11/12/2025 09:21:31 : epoch 693a8d0a : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc0080016a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 11 09:21:31 compute-0 systemd[1]: libpod-1701b94e492430a8725e0415c0cc3a69fe0b4445969ba12d7ddfd68da8738968.scope: Deactivated successfully.
Dec 11 09:21:31 compute-0 podman[107444]: 2025-12-11 09:21:31.323538532 +0000 UTC m=+0.436321329 container died 1701b94e492430a8725e0415c0cc3a69fe0b4445969ba12d7ddfd68da8738968 (image=quay.io/ceph/ceph:v19, name=inspiring_kapitsa, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Dec 11 09:21:31 compute-0 systemd[1]: var-lib-containers-storage-overlay-8dde60723b308ab4667b05b63587a6885ccc5e7605563675f560479a01896b47-merged.mount: Deactivated successfully.
Dec 11 09:21:31 compute-0 podman[107444]: 2025-12-11 09:21:31.451744051 +0000 UTC m=+0.564526858 container remove 1701b94e492430a8725e0415c0cc3a69fe0b4445969ba12d7ddfd68da8738968 (image=quay.io/ceph/ceph:v19, name=inspiring_kapitsa, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid)
Dec 11 09:21:31 compute-0 systemd[1]: libpod-conmon-1701b94e492430a8725e0415c0cc3a69fe0b4445969ba12d7ddfd68da8738968.scope: Deactivated successfully.
Dec 11 09:21:31 compute-0 sudo[107441]: pam_unix(sudo:session): session closed for user root
Dec 11 09:21:31 compute-0 ceph-mon[74426]: mon.compute-0@0(leader).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 11 09:21:31 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-2-0-compute-0-iryjby[106915]: 11/12/2025 09:21:31 : epoch 693a8d0a : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc0340021f0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 11 09:21:31 compute-0 sudo[107584]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vgpsunrfnqxbelfknyuojyvttpzvczbh ; /usr/bin/python3'
Dec 11 09:21:31 compute-0 sudo[107584]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 11 09:21:31 compute-0 podman[107587]: 2025-12-11 09:21:31.836095223 +0000 UTC m=+0.033416551 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Dec 11 09:21:31 compute-0 podman[107587]: 2025-12-11 09:21:31.936453356 +0000 UTC m=+0.133774664 container create 44abc48ec75a2c60c8efa25fdc6800eccc670026a8a74423540221912597004f (image=quay.io/ceph/ceph:v19, name=vibrant_raman, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Dec 11 09:21:32 compute-0 systemd[1]: Started libpod-conmon-44abc48ec75a2c60c8efa25fdc6800eccc670026a8a74423540221912597004f.scope.
Dec 11 09:21:32 compute-0 systemd[1]: Started libcrun container.
Dec 11 09:21:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f229395ac4eae938574e4680b080990d213de194dba2f8a7367a2c5f18cb07cc/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec 11 09:21:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f229395ac4eae938574e4680b080990d213de194dba2f8a7367a2c5f18cb07cc/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 11 09:21:32 compute-0 podman[107587]: 2025-12-11 09:21:32.200564925 +0000 UTC m=+0.397886253 container init 44abc48ec75a2c60c8efa25fdc6800eccc670026a8a74423540221912597004f (image=quay.io/ceph/ceph:v19, name=vibrant_raman, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec 11 09:21:32 compute-0 podman[107587]: 2025-12-11 09:21:32.211008051 +0000 UTC m=+0.408329359 container start 44abc48ec75a2c60c8efa25fdc6800eccc670026a8a74423540221912597004f (image=quay.io/ceph/ceph:v19, name=vibrant_raman, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Dec 11 09:21:32 compute-0 podman[107587]: 2025-12-11 09:21:32.279855832 +0000 UTC m=+0.477177140 container attach 44abc48ec75a2c60c8efa25fdc6800eccc670026a8a74423540221912597004f (image=quay.io/ceph/ceph:v19, name=vibrant_raman, ceph=True, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 11 09:21:32 compute-0 ceph-mon[74426]: pgmap v126: 353 pgs: 353 active+clean; 457 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.1 KiB/s rd, 767 B/s wr, 3 op/s
Dec 11 09:21:32 compute-0 vibrant_raman[107602]: {
Dec 11 09:21:32 compute-0 vibrant_raman[107602]:     "user_id": "glance",
Dec 11 09:21:32 compute-0 vibrant_raman[107602]:     "display_name": "Glance S3 User",
Dec 11 09:21:32 compute-0 vibrant_raman[107602]:     "email": "",
Dec 11 09:21:32 compute-0 vibrant_raman[107602]:     "suspended": 0,
Dec 11 09:21:32 compute-0 vibrant_raman[107602]:     "max_buckets": 1000,
Dec 11 09:21:32 compute-0 vibrant_raman[107602]:     "subusers": [],
Dec 11 09:21:32 compute-0 vibrant_raman[107602]:     "keys": [
Dec 11 09:21:32 compute-0 vibrant_raman[107602]:         {
Dec 11 09:21:32 compute-0 vibrant_raman[107602]:             "user": "glance",
Dec 11 09:21:32 compute-0 vibrant_raman[107602]:             "access_key": "GNR1MR1BYF7825JFD351",
Dec 11 09:21:32 compute-0 vibrant_raman[107602]:             "secret_key": "mFuATK0qg8AImsiOA1tIglqaIU1BY6LWpBECm8x6",
Dec 11 09:21:32 compute-0 vibrant_raman[107602]:             "active": true,
Dec 11 09:21:32 compute-0 vibrant_raman[107602]:             "create_date": "2025-12-11T09:21:31.201006Z"
Dec 11 09:21:32 compute-0 vibrant_raman[107602]:         }
Dec 11 09:21:32 compute-0 vibrant_raman[107602]:     ],
Dec 11 09:21:32 compute-0 vibrant_raman[107602]:     "swift_keys": [],
Dec 11 09:21:32 compute-0 vibrant_raman[107602]:     "caps": [],
Dec 11 09:21:32 compute-0 vibrant_raman[107602]:     "op_mask": "read, write, delete",
Dec 11 09:21:32 compute-0 vibrant_raman[107602]:     "default_placement": "",
Dec 11 09:21:32 compute-0 vibrant_raman[107602]:     "default_storage_class": "",
Dec 11 09:21:32 compute-0 vibrant_raman[107602]:     "placement_tags": [],
Dec 11 09:21:32 compute-0 vibrant_raman[107602]:     "bucket_quota": {
Dec 11 09:21:32 compute-0 vibrant_raman[107602]:         "enabled": false,
Dec 11 09:21:32 compute-0 vibrant_raman[107602]:         "check_on_raw": false,
Dec 11 09:21:32 compute-0 vibrant_raman[107602]:         "max_size": -1,
Dec 11 09:21:32 compute-0 vibrant_raman[107602]:         "max_size_kb": 0,
Dec 11 09:21:32 compute-0 vibrant_raman[107602]:         "max_objects": -1
Dec 11 09:21:32 compute-0 vibrant_raman[107602]:     },
Dec 11 09:21:32 compute-0 vibrant_raman[107602]:     "user_quota": {
Dec 11 09:21:32 compute-0 vibrant_raman[107602]:         "enabled": false,
Dec 11 09:21:32 compute-0 vibrant_raman[107602]:         "check_on_raw": false,
Dec 11 09:21:32 compute-0 vibrant_raman[107602]:         "max_size": -1,
Dec 11 09:21:32 compute-0 vibrant_raman[107602]:         "max_size_kb": 0,
Dec 11 09:21:32 compute-0 vibrant_raman[107602]:         "max_objects": -1
Dec 11 09:21:32 compute-0 vibrant_raman[107602]:     },
Dec 11 09:21:32 compute-0 vibrant_raman[107602]:     "temp_url_keys": [],
Dec 11 09:21:32 compute-0 vibrant_raman[107602]:     "type": "rgw",
Dec 11 09:21:32 compute-0 vibrant_raman[107602]:     "mfa_ids": [],
Dec 11 09:21:32 compute-0 vibrant_raman[107602]:     "account_id": "",
Dec 11 09:21:32 compute-0 vibrant_raman[107602]:     "path": "/",
Dec 11 09:21:32 compute-0 vibrant_raman[107602]:     "create_date": "2025-12-11T09:21:31.200037Z",
Dec 11 09:21:32 compute-0 vibrant_raman[107602]:     "tags": [],
Dec 11 09:21:32 compute-0 vibrant_raman[107602]:     "group_ids": []
Dec 11 09:21:32 compute-0 vibrant_raman[107602]: }
Dec 11 09:21:32 compute-0 vibrant_raman[107602]: 
Dec 11 09:21:32 compute-0 systemd[1]: libpod-44abc48ec75a2c60c8efa25fdc6800eccc670026a8a74423540221912597004f.scope: Deactivated successfully.
Dec 11 09:21:32 compute-0 podman[107587]: 2025-12-11 09:21:32.788821021 +0000 UTC m=+0.986142329 container died 44abc48ec75a2c60c8efa25fdc6800eccc670026a8a74423540221912597004f (image=quay.io/ceph/ceph:v19, name=vibrant_raman, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.build-date=20250325)
Dec 11 09:21:32 compute-0 systemd[1]: var-lib-containers-storage-overlay-f229395ac4eae938574e4680b080990d213de194dba2f8a7367a2c5f18cb07cc-merged.mount: Deactivated successfully.
Dec 11 09:21:32 compute-0 podman[107587]: 2025-12-11 09:21:32.926673511 +0000 UTC m=+1.123994819 container remove 44abc48ec75a2c60c8efa25fdc6800eccc670026a8a74423540221912597004f (image=quay.io/ceph/ceph:v19, name=vibrant_raman, org.label-schema.build-date=20250325, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 11 09:21:32 compute-0 systemd[1]: libpod-conmon-44abc48ec75a2c60c8efa25fdc6800eccc670026a8a74423540221912597004f.scope: Deactivated successfully.
Dec 11 09:21:32 compute-0 sudo[107584]: pam_unix(sudo:session): session closed for user root
Dec 11 09:21:32 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-2-0-compute-0-iryjby[106915]: 11/12/2025 09:21:32 : epoch 693a8d0a : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc00c0016a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 11 09:21:33 compute-0 ceph-mgr[74715]: log_channel(cluster) log [DBG] : pgmap v127: 353 pgs: 353 active+clean; 457 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.1 KiB/s rd, 767 B/s wr, 3 op/s
Dec 11 09:21:33 compute-0 radosgw[93354]: ====== starting new request req=0x7f36cd2845d0 =====
Dec 11 09:21:33 compute-0 radosgw[93354]: ====== req done req=0x7f36cd2845d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 11 09:21:33 compute-0 radosgw[93354]: beast: 0x7f36cd2845d0: 192.168.122.100 - anonymous [11/Dec/2025:09:21:33.206 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 11 09:21:33 compute-0 radosgw[93354]: ====== starting new request req=0x7f36cd2845d0 =====
Dec 11 09:21:33 compute-0 radosgw[93354]: ====== req done req=0x7f36cd2845d0 op status=0 http_status=200 latency=0.001000032s ======
Dec 11 09:21:33 compute-0 radosgw[93354]: beast: 0x7f36cd2845d0: 192.168.122.102 - anonymous [11/Dec/2025:09:21:33.287 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Dec 11 09:21:33 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-2-0-compute-0-iryjby[106915]: 11/12/2025 09:21:33 : epoch 693a8d0a : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc020001d70 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 11 09:21:33 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-2-0-compute-0-iryjby[106915]: 11/12/2025 09:21:33 : epoch 693a8d0a : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc0080016a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 11 09:21:34 compute-0 ceph-mon[74426]: pgmap v127: 353 pgs: 353 active+clean; 457 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.1 KiB/s rd, 767 B/s wr, 3 op/s
Dec 11 09:21:34 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-mgr-compute-0-wwpcae[74711]: ::ffff:192.168.122.100 - - [11/Dec/2025:09:21:34] "GET /metrics HTTP/1.1" 200 48243 "" "Prometheus/2.51.0"
Dec 11 09:21:34 compute-0 ceph-mgr[74715]: [prometheus INFO cherrypy.access.139680037156848] ::ffff:192.168.122.100 - - [11/Dec/2025:09:21:34] "GET /metrics HTTP/1.1" 200 48243 "" "Prometheus/2.51.0"
Dec 11 09:21:34 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-2-0-compute-0-iryjby[106915]: 11/12/2025 09:21:34 : epoch 693a8d0a : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc0340021f0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 11 09:21:35 compute-0 ceph-mgr[74715]: log_channel(cluster) log [DBG] : pgmap v128: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 5.6 KiB/s rd, 938 B/s wr, 6 op/s
Dec 11 09:21:35 compute-0 radosgw[93354]: ====== starting new request req=0x7f36cd2845d0 =====
Dec 11 09:21:35 compute-0 radosgw[93354]: ====== req done req=0x7f36cd2845d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 11 09:21:35 compute-0 radosgw[93354]: beast: 0x7f36cd2845d0: 192.168.122.100 - anonymous [11/Dec/2025:09:21:35.209 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 11 09:21:35 compute-0 radosgw[93354]: ====== starting new request req=0x7f36cd2845d0 =====
Dec 11 09:21:35 compute-0 radosgw[93354]: ====== req done req=0x7f36cd2845d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 11 09:21:35 compute-0 radosgw[93354]: beast: 0x7f36cd2845d0: 192.168.122.102 - anonymous [11/Dec/2025:09:21:35.291 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 11 09:21:35 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-2-0-compute-0-iryjby[106915]: 11/12/2025 09:21:35 : epoch 693a8d0a : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc00c0016a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 11 09:21:35 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-2-0-compute-0-iryjby[106915]: 11/12/2025 09:21:35 : epoch 693a8d0a : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc020001d70 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 11 09:21:36 compute-0 ceph-mon[74426]: pgmap v128: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 5.6 KiB/s rd, 938 B/s wr, 6 op/s
Dec 11 09:21:36 compute-0 ceph-mon[74426]: mon.compute-0@0(leader).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 11 09:21:36 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-2-0-compute-0-iryjby[106915]: 11/12/2025 09:21:36 : epoch 693a8d0a : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc0080016a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 11 09:21:37 compute-0 ceph-mgr[74715]: log_channel(cluster) log [DBG] : pgmap v129: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 3.6 KiB/s rd, 170 B/s wr, 3 op/s
Dec 11 09:21:37 compute-0 radosgw[93354]: ====== starting new request req=0x7f36cd2845d0 =====
Dec 11 09:21:37 compute-0 radosgw[93354]: ====== req done req=0x7f36cd2845d0 op status=0 http_status=200 latency=0.001000032s ======
Dec 11 09:21:37 compute-0 radosgw[93354]: beast: 0x7f36cd2845d0: 192.168.122.100 - anonymous [11/Dec/2025:09:21:37.210 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Dec 11 09:21:37 compute-0 radosgw[93354]: ====== starting new request req=0x7f36cd2845d0 =====
Dec 11 09:21:37 compute-0 radosgw[93354]: ====== req done req=0x7f36cd2845d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 11 09:21:37 compute-0 radosgw[93354]: beast: 0x7f36cd2845d0: 192.168.122.102 - anonymous [11/Dec/2025:09:21:37.295 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 11 09:21:37 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-2-0-compute-0-iryjby[106915]: 11/12/2025 09:21:37 : epoch 693a8d0a : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc0340021f0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 11 09:21:37 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-2-0-compute-0-iryjby[106915]: 11/12/2025 09:21:37 : epoch 693a8d0a : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc0340021f0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 11 09:21:37 compute-0 sudo[107708]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 11 09:21:37 compute-0 sudo[107708]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 11 09:21:37 compute-0 sudo[107708]: pam_unix(sudo:session): session closed for user root
Dec 11 09:21:38 compute-0 ceph-mon[74426]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Dec 11 09:21:38 compute-0 ceph-mon[74426]: log_channel(audit) log [DBG] : from='mgr.14577 192.168.122.100:0/2102670525' entity='mgr.compute-0.wwpcae' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec 11 09:21:38 compute-0 ceph-mon[74426]: pgmap v129: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 3.6 KiB/s rd, 170 B/s wr, 3 op/s
Dec 11 09:21:38 compute-0 ceph-mon[74426]: from='mgr.14577 192.168.122.100:0/2102670525' entity='mgr.compute-0.wwpcae' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec 11 09:21:38 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-2-0-compute-0-iryjby[106915]: 11/12/2025 09:21:38 : epoch 693a8d0a : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc020001d70 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 11 09:21:39 compute-0 ceph-mgr[74715]: log_channel(cluster) log [DBG] : pgmap v130: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 3.8 KiB/s rd, 170 B/s wr, 4 op/s
Dec 11 09:21:39 compute-0 radosgw[93354]: ====== starting new request req=0x7f36cd2845d0 =====
Dec 11 09:21:39 compute-0 radosgw[93354]: ====== req done req=0x7f36cd2845d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 11 09:21:39 compute-0 radosgw[93354]: beast: 0x7f36cd2845d0: 192.168.122.100 - anonymous [11/Dec/2025:09:21:39.213 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 11 09:21:39 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-2-0-compute-0-iryjby[106915]: 11/12/2025 09:21:39 : epoch 693a8d0a : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc008002b10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 11 09:21:39 compute-0 radosgw[93354]: ====== starting new request req=0x7f36cd2845d0 =====
Dec 11 09:21:39 compute-0 radosgw[93354]: ====== req done req=0x7f36cd2845d0 op status=0 http_status=200 latency=0.001000031s ======
Dec 11 09:21:39 compute-0 radosgw[93354]: beast: 0x7f36cd2845d0: 192.168.122.102 - anonymous [11/Dec/2025:09:21:39.297 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Dec 11 09:21:39 compute-0 ceph-mon[74426]: pgmap v130: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 3.8 KiB/s rd, 170 B/s wr, 4 op/s
Dec 11 09:21:39 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-2-0-compute-0-iryjby[106915]: 11/12/2025 09:21:39 : epoch 693a8d0a : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc00c002720 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 11 09:21:40 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-2-0-compute-0-iryjby[106915]: 11/12/2025 09:21:40 : epoch 693a8d0a : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc00c002720 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 11 09:21:41 compute-0 ceph-mgr[74715]: log_channel(cluster) log [DBG] : pgmap v131: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 3.7 KiB/s rd, 170 B/s wr, 3 op/s
Dec 11 09:21:41 compute-0 radosgw[93354]: ====== starting new request req=0x7f36cd2845d0 =====
Dec 11 09:21:41 compute-0 radosgw[93354]: ====== req done req=0x7f36cd2845d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 11 09:21:41 compute-0 radosgw[93354]: beast: 0x7f36cd2845d0: 192.168.122.100 - anonymous [11/Dec/2025:09:21:41.215 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 11 09:21:41 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-2-0-compute-0-iryjby[106915]: 11/12/2025 09:21:41 : epoch 693a8d0a : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc020003200 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 11 09:21:41 compute-0 radosgw[93354]: ====== starting new request req=0x7f36cd2845d0 =====
Dec 11 09:21:41 compute-0 radosgw[93354]: ====== req done req=0x7f36cd2845d0 op status=0 http_status=200 latency=0.001000032s ======
Dec 11 09:21:41 compute-0 radosgw[93354]: beast: 0x7f36cd2845d0: 192.168.122.102 - anonymous [11/Dec/2025:09:21:41.300 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Dec 11 09:21:41 compute-0 ceph-mon[74426]: mon.compute-0@0(leader).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 11 09:21:41 compute-0 kernel: ganesha.nfsd[107193]: segfault at 50 ip 00007fc0b523532e sp 00007fc02f7fd210 error 4 in libntirpc.so.5.8[7fc0b521a000+2c000] likely on CPU 6 (core 0, socket 6)
Dec 11 09:21:41 compute-0 kernel: Code: 47 20 66 41 89 86 f2 00 00 00 41 bf 01 00 00 00 b9 40 00 00 00 e9 af fd ff ff 66 90 48 8b 85 f8 00 00 00 48 8b 40 08 4c 8b 28 <45> 8b 65 50 49 8b 75 68 41 8b be 28 02 00 00 b9 40 00 00 00 e8 29
Dec 11 09:21:41 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-2-0-compute-0-iryjby[106915]: 11/12/2025 09:21:41 : epoch 693a8d0a : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc020003200 fd 39 proxy ignored for local
Dec 11 09:21:41 compute-0 systemd[1]: Started Process Core Dump (PID 107737/UID 0).
Dec 11 09:21:42 compute-0 ceph-mon[74426]: pgmap v131: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 3.7 KiB/s rd, 170 B/s wr, 3 op/s
Dec 11 09:21:42 compute-0 systemd-coredump[107738]: Process 106923 (ganesha.nfsd) of user 0 dumped core.
                                                    
                                                    Stack trace of thread 43:
                                                    #0  0x00007fc0b523532e n/a (/usr/lib64/libntirpc.so.5.8 + 0x2232e)
                                                    ELF object binary architecture: AMD x86-64
Dec 11 09:21:42 compute-0 systemd[1]: systemd-coredump@1-107737-0.service: Deactivated successfully.
Dec 11 09:21:42 compute-0 systemd[1]: systemd-coredump@1-107737-0.service: Consumed 1.309s CPU time.
Dec 11 09:21:43 compute-0 podman[107744]: 2025-12-11 09:21:43.024946389 +0000 UTC m=+0.033763271 container died 18ad14b9cd928a45ce4e864db1fb0529242cd12e88a89b171140393a4d5cd3a8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-2-0-compute-0-iryjby, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=squid, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 11 09:21:43 compute-0 systemd[1]: var-lib-containers-storage-overlay-568004864dcaa8ecdf83d46733e0df5cf01151fc9ec5c27bf3177ab929ab045c-merged.mount: Deactivated successfully.
Dec 11 09:21:43 compute-0 podman[107744]: 2025-12-11 09:21:43.071508578 +0000 UTC m=+0.080325460 container remove 18ad14b9cd928a45ce4e864db1fb0529242cd12e88a89b171140393a4d5cd3a8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-2-0-compute-0-iryjby, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec 11 09:21:43 compute-0 systemd[1]: ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060@nfs.cephfs.2.0.compute-0.iryjby.service: Main process exited, code=exited, status=139/n/a
Dec 11 09:21:43 compute-0 ceph-mgr[74715]: log_channel(cluster) log [DBG] : pgmap v132: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 3.7 KiB/s rd, 170 B/s wr, 3 op/s
Dec 11 09:21:43 compute-0 radosgw[93354]: ====== starting new request req=0x7f36cd2845d0 =====
Dec 11 09:21:43 compute-0 radosgw[93354]: ====== req done req=0x7f36cd2845d0 op status=0 http_status=200 latency=0.001000031s ======
Dec 11 09:21:43 compute-0 radosgw[93354]: beast: 0x7f36cd2845d0: 192.168.122.100 - anonymous [11/Dec/2025:09:21:43.217 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Dec 11 09:21:43 compute-0 systemd[1]: ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060@nfs.cephfs.2.0.compute-0.iryjby.service: Failed with result 'exit-code'.
Dec 11 09:21:43 compute-0 systemd[1]: ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060@nfs.cephfs.2.0.compute-0.iryjby.service: Consumed 1.695s CPU time.
Dec 11 09:21:43 compute-0 radosgw[93354]: ====== starting new request req=0x7f36cd2845d0 =====
Dec 11 09:21:43 compute-0 radosgw[93354]: ====== req done req=0x7f36cd2845d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 11 09:21:43 compute-0 radosgw[93354]: beast: 0x7f36cd2845d0: 192.168.122.102 - anonymous [11/Dec/2025:09:21:43.304 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 11 09:21:44 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-mgr-compute-0-wwpcae[74711]: ::ffff:192.168.122.100 - - [11/Dec/2025:09:21:44] "GET /metrics HTTP/1.1" 200 48282 "" "Prometheus/2.51.0"
Dec 11 09:21:44 compute-0 ceph-mgr[74715]: [prometheus INFO cherrypy.access.139680037156848] ::ffff:192.168.122.100 - - [11/Dec/2025:09:21:44] "GET /metrics HTTP/1.1" 200 48282 "" "Prometheus/2.51.0"
Dec 11 09:21:44 compute-0 ceph-mon[74426]: pgmap v132: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 3.7 KiB/s rd, 170 B/s wr, 3 op/s
Dec 11 09:21:45 compute-0 ceph-mgr[74715]: log_channel(cluster) log [DBG] : pgmap v133: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 3.7 KiB/s rd, 170 B/s wr, 3 op/s
Dec 11 09:21:45 compute-0 radosgw[93354]: ====== starting new request req=0x7f36cd2845d0 =====
Dec 11 09:21:45 compute-0 radosgw[93354]: ====== req done req=0x7f36cd2845d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 11 09:21:45 compute-0 radosgw[93354]: beast: 0x7f36cd2845d0: 192.168.122.100 - anonymous [11/Dec/2025:09:21:45.220 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 11 09:21:45 compute-0 radosgw[93354]: ====== starting new request req=0x7f36cd2845d0 =====
Dec 11 09:21:45 compute-0 radosgw[93354]: ====== req done req=0x7f36cd2845d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 11 09:21:45 compute-0 radosgw[93354]: beast: 0x7f36cd2845d0: 192.168.122.102 - anonymous [11/Dec/2025:09:21:45.307 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 11 09:21:45 compute-0 ceph-mon[74426]: pgmap v133: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 3.7 KiB/s rd, 170 B/s wr, 3 op/s
Dec 11 09:21:46 compute-0 ceph-mon[74426]: mon.compute-0@0(leader).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 11 09:21:47 compute-0 ceph-mgr[74715]: log_channel(cluster) log [DBG] : pgmap v134: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 11 09:21:47 compute-0 radosgw[93354]: ====== starting new request req=0x7f36cd2845d0 =====
Dec 11 09:21:47 compute-0 radosgw[93354]: ====== req done req=0x7f36cd2845d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 11 09:21:47 compute-0 radosgw[93354]: beast: 0x7f36cd2845d0: 192.168.122.100 - anonymous [11/Dec/2025:09:21:47.222 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 11 09:21:47 compute-0 radosgw[93354]: ====== starting new request req=0x7f36cd2845d0 =====
Dec 11 09:21:47 compute-0 radosgw[93354]: ====== req done req=0x7f36cd2845d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 11 09:21:47 compute-0 radosgw[93354]: beast: 0x7f36cd2845d0: 192.168.122.102 - anonymous [11/Dec/2025:09:21:47.311 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 11 09:21:47 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-haproxy-nfs-cephfs-compute-0-qtoxfz[95274]: [WARNING] 344/092147 (4) : Server backend/nfs.cephfs.2 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 2 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Dec 11 09:21:48 compute-0 ceph-mon[74426]: pgmap v134: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 11 09:21:49 compute-0 ceph-mgr[74715]: log_channel(cluster) log [DBG] : pgmap v135: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 0 op/s
Dec 11 09:21:49 compute-0 radosgw[93354]: ====== starting new request req=0x7f36cd2845d0 =====
Dec 11 09:21:49 compute-0 radosgw[93354]: ====== req done req=0x7f36cd2845d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 11 09:21:49 compute-0 radosgw[93354]: beast: 0x7f36cd2845d0: 192.168.122.100 - anonymous [11/Dec/2025:09:21:49.224 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 11 09:21:49 compute-0 radosgw[93354]: ====== starting new request req=0x7f36cd2845d0 =====
Dec 11 09:21:49 compute-0 radosgw[93354]: ====== req done req=0x7f36cd2845d0 op status=0 http_status=200 latency=0.001000032s ======
Dec 11 09:21:49 compute-0 radosgw[93354]: beast: 0x7f36cd2845d0: 192.168.122.102 - anonymous [11/Dec/2025:09:21:49.314 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Dec 11 09:21:49 compute-0 ceph-mon[74426]: pgmap v135: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 0 op/s
Dec 11 09:21:51 compute-0 ceph-mgr[74715]: log_channel(cluster) log [DBG] : pgmap v136: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Dec 11 09:21:51 compute-0 radosgw[93354]: ====== starting new request req=0x7f36cd2845d0 =====
Dec 11 09:21:51 compute-0 radosgw[93354]: ====== req done req=0x7f36cd2845d0 op status=0 http_status=200 latency=0.001000032s ======
Dec 11 09:21:51 compute-0 radosgw[93354]: beast: 0x7f36cd2845d0: 192.168.122.100 - anonymous [11/Dec/2025:09:21:51.226 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Dec 11 09:21:51 compute-0 radosgw[93354]: ====== starting new request req=0x7f36cd2845d0 =====
Dec 11 09:21:51 compute-0 radosgw[93354]: ====== req done req=0x7f36cd2845d0 op status=0 http_status=200 latency=0.001000032s ======
Dec 11 09:21:51 compute-0 radosgw[93354]: beast: 0x7f36cd2845d0: 192.168.122.102 - anonymous [11/Dec/2025:09:21:51.318 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Dec 11 09:21:51 compute-0 ceph-mon[74426]: mon.compute-0@0(leader).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 11 09:21:52 compute-0 ceph-mon[74426]: pgmap v136: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Dec 11 09:21:53 compute-0 ceph-mgr[74715]: log_channel(cluster) log [DBG] : pgmap v137: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Dec 11 09:21:53 compute-0 radosgw[93354]: ====== starting new request req=0x7f36cd2845d0 =====
Dec 11 09:21:53 compute-0 radosgw[93354]: ====== req done req=0x7f36cd2845d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 11 09:21:53 compute-0 radosgw[93354]: beast: 0x7f36cd2845d0: 192.168.122.100 - anonymous [11/Dec/2025:09:21:53.230 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 11 09:21:53 compute-0 ceph-mon[74426]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Dec 11 09:21:53 compute-0 ceph-mon[74426]: log_channel(audit) log [DBG] : from='mgr.14577 192.168.122.100:0/2102670525' entity='mgr.compute-0.wwpcae' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec 11 09:21:53 compute-0 radosgw[93354]: ====== starting new request req=0x7f36cd2845d0 =====
Dec 11 09:21:53 compute-0 radosgw[93354]: ====== req done req=0x7f36cd2845d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 11 09:21:53 compute-0 radosgw[93354]: beast: 0x7f36cd2845d0: 192.168.122.102 - anonymous [11/Dec/2025:09:21:53.323 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 11 09:21:53 compute-0 ceph-mgr[74715]: [volumes INFO mgr_util] scanning for idle connections..
Dec 11 09:21:53 compute-0 ceph-mgr[74715]: [volumes INFO mgr_util] cleaning up connections: []
Dec 11 09:21:53 compute-0 ceph-mgr[74715]: [volumes INFO mgr_util] scanning for idle connections..
Dec 11 09:21:53 compute-0 ceph-mgr[74715]: [volumes INFO mgr_util] cleaning up connections: []
Dec 11 09:21:53 compute-0 ceph-mgr[74715]: [volumes INFO mgr_util] scanning for idle connections..
Dec 11 09:21:53 compute-0 ceph-mgr[74715]: [volumes INFO mgr_util] cleaning up connections: []
Dec 11 09:21:53 compute-0 ceph-mon[74426]: from='mgr.14577 192.168.122.100:0/2102670525' entity='mgr.compute-0.wwpcae' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec 11 09:21:53 compute-0 systemd[1]: ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060@nfs.cephfs.2.0.compute-0.iryjby.service: Scheduled restart job, restart counter is at 2.
Dec 11 09:21:53 compute-0 systemd[1]: Stopped Ceph nfs.cephfs.2.0.compute-0.iryjby for 31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060.
Dec 11 09:21:53 compute-0 systemd[1]: ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060@nfs.cephfs.2.0.compute-0.iryjby.service: Consumed 1.695s CPU time.
Dec 11 09:21:53 compute-0 systemd[1]: Starting Ceph nfs.cephfs.2.0.compute-0.iryjby for 31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060...
Dec 11 09:21:53 compute-0 podman[107850]: 2025-12-11 09:21:53.770056597 +0000 UTC m=+0.045930841 container create 0419a5d7a570af397d527bad438de2b8b703491db5b5997ab6e1bf6be23d6dcf (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-2-0-compute-0-iryjby, ceph=True, CEPH_REF=squid, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Dec 11 09:21:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ffc6f6553de47b9a3ea5bb295a25f5118f620e924c33a0086c99f6a5d42eddbc/merged/etc/ganesha supports timestamps until 2038 (0x7fffffff)
Dec 11 09:21:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ffc6f6553de47b9a3ea5bb295a25f5118f620e924c33a0086c99f6a5d42eddbc/merged/etc/ceph/keyring supports timestamps until 2038 (0x7fffffff)
Dec 11 09:21:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ffc6f6553de47b9a3ea5bb295a25f5118f620e924c33a0086c99f6a5d42eddbc/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 11 09:21:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ffc6f6553de47b9a3ea5bb295a25f5118f620e924c33a0086c99f6a5d42eddbc/merged/var/lib/ceph/radosgw/ceph-nfs.cephfs.2.0.compute-0.iryjby-rgw/keyring supports timestamps until 2038 (0x7fffffff)
Dec 11 09:21:53 compute-0 podman[107850]: 2025-12-11 09:21:53.840739827 +0000 UTC m=+0.116614091 container init 0419a5d7a570af397d527bad438de2b8b703491db5b5997ab6e1bf6be23d6dcf (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-2-0-compute-0-iryjby, OSD_FLAVOR=default, CEPH_REF=squid, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1)
Dec 11 09:21:53 compute-0 podman[107850]: 2025-12-11 09:21:53.749005961 +0000 UTC m=+0.024880205 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec 11 09:21:53 compute-0 podman[107850]: 2025-12-11 09:21:53.846801394 +0000 UTC m=+0.122675638 container start 0419a5d7a570af397d527bad438de2b8b703491db5b5997ab6e1bf6be23d6dcf (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-2-0-compute-0-iryjby, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Dec 11 09:21:53 compute-0 bash[107850]: 0419a5d7a570af397d527bad438de2b8b703491db5b5997ab6e1bf6be23d6dcf
Dec 11 09:21:53 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-2-0-compute-0-iryjby[107865]: 11/12/2025 09:21:53 : epoch 693a8d31 : compute-0 : ganesha.nfsd-2[main] init_logging :LOG :NULL :LOG: Setting log level for all components to NIV_EVENT
Dec 11 09:21:53 compute-0 systemd[1]: Started Ceph nfs.cephfs.2.0.compute-0.iryjby for 31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060.
Dec 11 09:21:53 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-2-0-compute-0-iryjby[107865]: 11/12/2025 09:21:53 : epoch 693a8d31 : compute-0 : ganesha.nfsd-2[main] main :MAIN :EVENT :ganesha.nfsd Starting: Ganesha Version 5.9
Dec 11 09:21:53 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-2-0-compute-0-iryjby[107865]: 11/12/2025 09:21:53 : epoch 693a8d31 : compute-0 : ganesha.nfsd-2[main] nfs_set_param_from_conf :NFS STARTUP :EVENT :Configuration file successfully parsed
Dec 11 09:21:53 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-2-0-compute-0-iryjby[107865]: 11/12/2025 09:21:53 : epoch 693a8d31 : compute-0 : ganesha.nfsd-2[main] monitoring_init :NFS STARTUP :EVENT :Init monitoring at 0.0.0.0:9587
Dec 11 09:21:53 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-2-0-compute-0-iryjby[107865]: 11/12/2025 09:21:53 : epoch 693a8d31 : compute-0 : ganesha.nfsd-2[main] fsal_init_fds_limit :MDCACHE LRU :EVENT :Setting the system-imposed limit on FDs to 1048576.
Dec 11 09:21:53 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-2-0-compute-0-iryjby[107865]: 11/12/2025 09:21:53 : epoch 693a8d31 : compute-0 : ganesha.nfsd-2[main] init_server_pkgs :NFS STARTUP :EVENT :Initializing ID Mapper.
Dec 11 09:21:53 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-2-0-compute-0-iryjby[107865]: 11/12/2025 09:21:53 : epoch 693a8d31 : compute-0 : ganesha.nfsd-2[main] init_server_pkgs :NFS STARTUP :EVENT :ID Mapper successfully initialized.
Dec 11 09:21:53 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-2-0-compute-0-iryjby[107865]: 11/12/2025 09:21:53 : epoch 693a8d31 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Dec 11 09:21:54 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-mgr-compute-0-wwpcae[74711]: ::ffff:192.168.122.100 - - [11/Dec/2025:09:21:54] "GET /metrics HTTP/1.1" 200 48277 "" "Prometheus/2.51.0"
Dec 11 09:21:54 compute-0 ceph-mgr[74715]: [prometheus INFO cherrypy.access.139680037156848] ::ffff:192.168.122.100 - - [11/Dec/2025:09:21:54] "GET /metrics HTTP/1.1" 200 48277 "" "Prometheus/2.51.0"
Dec 11 09:21:54 compute-0 ceph-mon[74426]: pgmap v137: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Dec 11 09:21:55 compute-0 ceph-mgr[74715]: log_channel(cluster) log [DBG] : pgmap v138: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 597 B/s rd, 85 B/s wr, 0 op/s
Dec 11 09:21:55 compute-0 radosgw[93354]: ====== starting new request req=0x7f36cd2845d0 =====
Dec 11 09:21:55 compute-0 radosgw[93354]: ====== req done req=0x7f36cd2845d0 op status=0 http_status=200 latency=0.002000063s ======
Dec 11 09:21:55 compute-0 radosgw[93354]: beast: 0x7f36cd2845d0: 192.168.122.100 - anonymous [11/Dec/2025:09:21:55.234 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000063s
Dec 11 09:21:55 compute-0 radosgw[93354]: ====== starting new request req=0x7f36cd2845d0 =====
Dec 11 09:21:55 compute-0 radosgw[93354]: ====== req done req=0x7f36cd2845d0 op status=0 http_status=200 latency=0.001000031s ======
Dec 11 09:21:55 compute-0 radosgw[93354]: beast: 0x7f36cd2845d0: 192.168.122.102 - anonymous [11/Dec/2025:09:21:55.327 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Dec 11 09:21:55 compute-0 ceph-mon[74426]: pgmap v138: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 597 B/s rd, 85 B/s wr, 0 op/s
Dec 11 09:21:56 compute-0 ceph-mon[74426]: mon.compute-0@0(leader).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 11 09:21:57 compute-0 ceph-mgr[74715]: log_channel(cluster) log [DBG] : pgmap v139: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 597 B/s rd, 85 B/s wr, 0 op/s
Dec 11 09:21:57 compute-0 radosgw[93354]: ====== starting new request req=0x7f36cd2845d0 =====
Dec 11 09:21:57 compute-0 radosgw[93354]: ====== req done req=0x7f36cd2845d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 11 09:21:57 compute-0 radosgw[93354]: beast: 0x7f36cd2845d0: 192.168.122.100 - anonymous [11/Dec/2025:09:21:57.238 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 11 09:21:57 compute-0 radosgw[93354]: ====== starting new request req=0x7f36cd2845d0 =====
Dec 11 09:21:57 compute-0 radosgw[93354]: ====== req done req=0x7f36cd2845d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 11 09:21:57 compute-0 radosgw[93354]: beast: 0x7f36cd2845d0: 192.168.122.102 - anonymous [11/Dec/2025:09:21:57.330 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 11 09:21:57 compute-0 sudo[107911]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 11 09:21:57 compute-0 sudo[107911]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 11 09:21:57 compute-0 sudo[107911]: pam_unix(sudo:session): session closed for user root
Dec 11 09:21:57 compute-0 ceph-mon[74426]: pgmap v139: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 597 B/s rd, 85 B/s wr, 0 op/s
Dec 11 09:21:59 compute-0 ceph-mgr[74715]: log_channel(cluster) log [DBG] : pgmap v140: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.4 KiB/s rd, 597 B/s wr, 2 op/s
Dec 11 09:21:59 compute-0 radosgw[93354]: ====== starting new request req=0x7f36cd2845d0 =====
Dec 11 09:21:59 compute-0 radosgw[93354]: ====== req done req=0x7f36cd2845d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 11 09:21:59 compute-0 radosgw[93354]: beast: 0x7f36cd2845d0: 192.168.122.100 - anonymous [11/Dec/2025:09:21:59.240 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 11 09:21:59 compute-0 radosgw[93354]: ====== starting new request req=0x7f36cd2845d0 =====
Dec 11 09:21:59 compute-0 radosgw[93354]: ====== req done req=0x7f36cd2845d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 11 09:21:59 compute-0 radosgw[93354]: beast: 0x7f36cd2845d0: 192.168.122.102 - anonymous [11/Dec/2025:09:21:59.334 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 11 09:22:00 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-2-0-compute-0-iryjby[107865]: 11/12/2025 09:22:00 : epoch 693a8d31 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Dec 11 09:22:00 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-2-0-compute-0-iryjby[107865]: 11/12/2025 09:22:00 : epoch 693a8d31 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Dec 11 09:22:00 compute-0 ceph-mon[74426]: pgmap v140: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.4 KiB/s rd, 597 B/s wr, 2 op/s
Dec 11 09:22:01 compute-0 ceph-mgr[74715]: log_channel(cluster) log [DBG] : pgmap v141: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 597 B/s wr, 1 op/s
Dec 11 09:22:01 compute-0 radosgw[93354]: ====== starting new request req=0x7f36cd2845d0 =====
Dec 11 09:22:01 compute-0 radosgw[93354]: ====== req done req=0x7f36cd2845d0 op status=0 http_status=200 latency=0.001000031s ======
Dec 11 09:22:01 compute-0 radosgw[93354]: beast: 0x7f36cd2845d0: 192.168.122.100 - anonymous [11/Dec/2025:09:22:01.242 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Dec 11 09:22:01 compute-0 radosgw[93354]: ====== starting new request req=0x7f36cd2845d0 =====
Dec 11 09:22:01 compute-0 radosgw[93354]: ====== req done req=0x7f36cd2845d0 op status=0 http_status=200 latency=0.001000031s ======
Dec 11 09:22:01 compute-0 radosgw[93354]: beast: 0x7f36cd2845d0: 192.168.122.102 - anonymous [11/Dec/2025:09:22:01.337 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Dec 11 09:22:01 compute-0 ceph-mon[74426]: mon.compute-0@0(leader).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 11 09:22:01 compute-0 ceph-mon[74426]: pgmap v141: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 597 B/s wr, 1 op/s
Dec 11 09:22:03 compute-0 ceph-mgr[74715]: log_channel(cluster) log [DBG] : pgmap v142: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 597 B/s wr, 1 op/s
Dec 11 09:22:03 compute-0 radosgw[93354]: ====== starting new request req=0x7f36cd2845d0 =====
Dec 11 09:22:03 compute-0 radosgw[93354]: ====== req done req=0x7f36cd2845d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 11 09:22:03 compute-0 radosgw[93354]: beast: 0x7f36cd2845d0: 192.168.122.100 - anonymous [11/Dec/2025:09:22:03.245 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 11 09:22:03 compute-0 radosgw[93354]: ====== starting new request req=0x7f36cd2845d0 =====
Dec 11 09:22:03 compute-0 radosgw[93354]: ====== req done req=0x7f36cd2845d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 11 09:22:03 compute-0 radosgw[93354]: beast: 0x7f36cd2845d0: 192.168.122.102 - anonymous [11/Dec/2025:09:22:03.340 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 11 09:22:04 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-mgr-compute-0-wwpcae[74711]: ::ffff:192.168.122.100 - - [11/Dec/2025:09:22:04] "GET /metrics HTTP/1.1" 200 48277 "" "Prometheus/2.51.0"
Dec 11 09:22:04 compute-0 ceph-mgr[74715]: [prometheus INFO cherrypy.access.139680037156848] ::ffff:192.168.122.100 - - [11/Dec/2025:09:22:04] "GET /metrics HTTP/1.1" 200 48277 "" "Prometheus/2.51.0"
Dec 11 09:22:04 compute-0 ceph-mon[74426]: pgmap v142: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 597 B/s wr, 1 op/s
Dec 11 09:22:05 compute-0 ceph-mgr[74715]: log_channel(cluster) log [DBG] : pgmap v143: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.9 KiB/s rd, 1023 B/s wr, 4 op/s
Dec 11 09:22:05 compute-0 radosgw[93354]: ====== starting new request req=0x7f36cd2845d0 =====
Dec 11 09:22:05 compute-0 radosgw[93354]: ====== req done req=0x7f36cd2845d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 11 09:22:05 compute-0 radosgw[93354]: beast: 0x7f36cd2845d0: 192.168.122.100 - anonymous [11/Dec/2025:09:22:05.246 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 11 09:22:05 compute-0 radosgw[93354]: ====== starting new request req=0x7f36cd2845d0 =====
Dec 11 09:22:05 compute-0 radosgw[93354]: ====== req done req=0x7f36cd2845d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 11 09:22:05 compute-0 radosgw[93354]: beast: 0x7f36cd2845d0: 192.168.122.102 - anonymous [11/Dec/2025:09:22:05.343 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 11 09:22:05 compute-0 ceph-mon[74426]: pgmap v143: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.9 KiB/s rd, 1023 B/s wr, 4 op/s
Dec 11 09:22:06 compute-0 ceph-mon[74426]: mon.compute-0@0(leader).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 11 09:22:06 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-2-0-compute-0-iryjby[107865]: 11/12/2025 09:22:06 : epoch 693a8d31 : compute-0 : ganesha.nfsd-2[main] nfs_lift_grace_locked :STATE :EVENT :NFS Server Now NOT IN GRACE
Dec 11 09:22:06 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-2-0-compute-0-iryjby[107865]: 11/12/2025 09:22:06 : epoch 693a8d31 : compute-0 : ganesha.nfsd-2[main] main :NFS STARTUP :WARN :No export entries found in configuration file !!!
Dec 11 09:22:06 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-2-0-compute-0-iryjby[107865]: 11/12/2025 09:22:06 : epoch 693a8d31 : compute-0 : ganesha.nfsd-2[main] config_errs_to_log :CONFIG :WARN :Config File (/etc/ganesha/ganesha.conf:24): Unknown block (RADOS_URLS)
Dec 11 09:22:06 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-2-0-compute-0-iryjby[107865]: 11/12/2025 09:22:06 : epoch 693a8d31 : compute-0 : ganesha.nfsd-2[main] config_errs_to_log :CONFIG :WARN :Config File (/etc/ganesha/ganesha.conf:29): Unknown block (RGW)
Dec 11 09:22:06 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-2-0-compute-0-iryjby[107865]: 11/12/2025 09:22:06 : epoch 693a8d31 : compute-0 : ganesha.nfsd-2[main] lower_my_caps :NFS STARTUP :EVENT :CAP_SYS_RESOURCE was successfully removed for proper quota management in FSAL
Dec 11 09:22:06 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-2-0-compute-0-iryjby[107865]: 11/12/2025 09:22:06 : epoch 693a8d31 : compute-0 : ganesha.nfsd-2[main] lower_my_caps :NFS STARTUP :EVENT :currently set capabilities are: cap_chown,cap_dac_override,cap_fowner,cap_fsetid,cap_kill,cap_setgid,cap_setuid,cap_setpcap,cap_net_bind_service,cap_sys_chroot,cap_setfcap=ep
Dec 11 09:22:06 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-2-0-compute-0-iryjby[107865]: 11/12/2025 09:22:06 : epoch 693a8d31 : compute-0 : ganesha.nfsd-2[main] gsh_dbus_pkginit :DBUS :CRIT :dbus_bus_get failed (Failed to connect to socket /run/dbus/system_bus_socket: No such file or directory)
Dec 11 09:22:06 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-2-0-compute-0-iryjby[107865]: 11/12/2025 09:22:06 : epoch 693a8d31 : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Dec 11 09:22:06 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-2-0-compute-0-iryjby[107865]: 11/12/2025 09:22:06 : epoch 693a8d31 : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Dec 11 09:22:06 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-2-0-compute-0-iryjby[107865]: 11/12/2025 09:22:06 : epoch 693a8d31 : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Dec 11 09:22:06 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-2-0-compute-0-iryjby[107865]: 11/12/2025 09:22:06 : epoch 693a8d31 : compute-0 : ganesha.nfsd-2[main] nfs_Init_svc :DISP :CRIT :Cannot acquire credentials for principal nfs
Dec 11 09:22:06 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-2-0-compute-0-iryjby[107865]: 11/12/2025 09:22:06 : epoch 693a8d31 : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Dec 11 09:22:06 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-2-0-compute-0-iryjby[107865]: 11/12/2025 09:22:06 : epoch 693a8d31 : compute-0 : ganesha.nfsd-2[main] nfs_Init_admin_thread :NFS CB :EVENT :Admin thread initialized
Dec 11 09:22:06 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-2-0-compute-0-iryjby[107865]: 11/12/2025 09:22:06 : epoch 693a8d31 : compute-0 : ganesha.nfsd-2[main] nfs_rpc_cb_init_ccache :NFS STARTUP :EVENT :Callback creds directory (/var/run/ganesha) already exists
Dec 11 09:22:06 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-2-0-compute-0-iryjby[107865]: 11/12/2025 09:22:06 : epoch 693a8d31 : compute-0 : ganesha.nfsd-2[main] find_keytab_entry :NFS CB :WARN :Configuration file does not specify default realm while getting default realm name
Dec 11 09:22:06 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-2-0-compute-0-iryjby[107865]: 11/12/2025 09:22:06 : epoch 693a8d31 : compute-0 : ganesha.nfsd-2[main] gssd_refresh_krb5_machine_credential :NFS CB :CRIT :ERROR: gssd_refresh_krb5_machine_credential: no usable keytab entry found in keytab /etc/krb5.keytab for connection with host localhost
Dec 11 09:22:06 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-2-0-compute-0-iryjby[107865]: 11/12/2025 09:22:06 : epoch 693a8d31 : compute-0 : ganesha.nfsd-2[main] nfs_rpc_cb_init_ccache :NFS STARTUP :WARN :gssd_refresh_krb5_machine_credential failed (-1765328160:0)
Dec 11 09:22:06 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-2-0-compute-0-iryjby[107865]: 11/12/2025 09:22:06 : epoch 693a8d31 : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :Starting delayed executor.
Dec 11 09:22:06 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-2-0-compute-0-iryjby[107865]: 11/12/2025 09:22:06 : epoch 693a8d31 : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :gsh_dbusthread was started successfully
Dec 11 09:22:06 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-2-0-compute-0-iryjby[107865]: 11/12/2025 09:22:06 : epoch 693a8d31 : compute-0 : ganesha.nfsd-2[dbus] gsh_dbus_thread :DBUS :CRIT :DBUS not initialized, service thread exiting
Dec 11 09:22:06 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-2-0-compute-0-iryjby[107865]: 11/12/2025 09:22:06 : epoch 693a8d31 : compute-0 : ganesha.nfsd-2[dbus] gsh_dbus_thread :DBUS :EVENT :shutdown
Dec 11 09:22:06 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-2-0-compute-0-iryjby[107865]: 11/12/2025 09:22:06 : epoch 693a8d31 : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :admin thread was started successfully
Dec 11 09:22:06 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-2-0-compute-0-iryjby[107865]: 11/12/2025 09:22:06 : epoch 693a8d31 : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :reaper thread was started successfully
Dec 11 09:22:06 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-2-0-compute-0-iryjby[107865]: 11/12/2025 09:22:06 : epoch 693a8d31 : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :General fridge was started successfully
Dec 11 09:22:06 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-2-0-compute-0-iryjby[107865]: 11/12/2025 09:22:06 : epoch 693a8d31 : compute-0 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :-------------------------------------------------
Dec 11 09:22:06 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-2-0-compute-0-iryjby[107865]: 11/12/2025 09:22:06 : epoch 693a8d31 : compute-0 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :             NFS SERVER INITIALIZED
Dec 11 09:22:06 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-2-0-compute-0-iryjby[107865]: 11/12/2025 09:22:06 : epoch 693a8d31 : compute-0 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :-------------------------------------------------
Dec 11 09:22:07 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-2-0-compute-0-iryjby[107865]: 11/12/2025 09:22:07 : epoch 693a8d31 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fda14000df0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 11 09:22:07 compute-0 ceph-mgr[74715]: log_channel(cluster) log [DBG] : pgmap v144: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.5 KiB/s rd, 938 B/s wr, 3 op/s
Dec 11 09:22:07 compute-0 radosgw[93354]: ====== starting new request req=0x7f36cd2845d0 =====
Dec 11 09:22:07 compute-0 radosgw[93354]: ====== req done req=0x7f36cd2845d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 11 09:22:07 compute-0 radosgw[93354]: beast: 0x7f36cd2845d0: 192.168.122.100 - anonymous [11/Dec/2025:09:22:07.249 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 11 09:22:07 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-2-0-compute-0-iryjby[107865]: 11/12/2025 09:22:07 : epoch 693a8d31 : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fda04000da0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 11 09:22:07 compute-0 radosgw[93354]: ====== starting new request req=0x7f36cd2845d0 =====
Dec 11 09:22:07 compute-0 radosgw[93354]: ====== req done req=0x7f36cd2845d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 11 09:22:07 compute-0 radosgw[93354]: beast: 0x7f36cd2845d0: 192.168.122.102 - anonymous [11/Dec/2025:09:22:07.346 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 11 09:22:07 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-2-0-compute-0-iryjby[107865]: 11/12/2025 09:22:07 : epoch 693a8d31 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd9fc000b60 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 11 09:22:08 compute-0 ceph-mon[74426]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Dec 11 09:22:08 compute-0 ceph-mon[74426]: log_channel(audit) log [DBG] : from='mgr.14577 192.168.122.100:0/2102670525' entity='mgr.compute-0.wwpcae' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec 11 09:22:08 compute-0 ceph-mon[74426]: pgmap v144: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.5 KiB/s rd, 938 B/s wr, 3 op/s
Dec 11 09:22:08 compute-0 ceph-mon[74426]: from='mgr.14577 192.168.122.100:0/2102670525' entity='mgr.compute-0.wwpcae' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec 11 09:22:09 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-2-0-compute-0-iryjby[107865]: 11/12/2025 09:22:09 : epoch 693a8d31 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd9f0000b60 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 11 09:22:09 compute-0 ceph-mgr[74715]: log_channel(cluster) log [DBG] : pgmap v145: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.6 KiB/s rd, 938 B/s wr, 3 op/s
Dec 11 09:22:09 compute-0 radosgw[93354]: ====== starting new request req=0x7f36cd2845d0 =====
Dec 11 09:22:09 compute-0 radosgw[93354]: ====== req done req=0x7f36cd2845d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 11 09:22:09 compute-0 radosgw[93354]: beast: 0x7f36cd2845d0: 192.168.122.100 - anonymous [11/Dec/2025:09:22:09.252 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 11 09:22:09 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-2-0-compute-0-iryjby[107865]: 11/12/2025 09:22:09 : epoch 693a8d31 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd9f8000fa0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 11 09:22:09 compute-0 radosgw[93354]: ====== starting new request req=0x7f36cd2845d0 =====
Dec 11 09:22:09 compute-0 radosgw[93354]: ====== req done req=0x7f36cd2845d0 op status=0 http_status=200 latency=0.001000032s ======
Dec 11 09:22:09 compute-0 radosgw[93354]: beast: 0x7f36cd2845d0: 192.168.122.102 - anonymous [11/Dec/2025:09:22:09.350 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Dec 11 09:22:09 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-haproxy-nfs-cephfs-compute-0-qtoxfz[95274]: [WARNING] 344/092209 (4) : Server backend/nfs.cephfs.2 is UP, reason: Layer4 check passed, check duration: 0ms. 3 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Dec 11 09:22:09 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-2-0-compute-0-iryjby[107865]: 11/12/2025 09:22:09 : epoch 693a8d31 : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fda04000da0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 11 09:22:10 compute-0 ceph-mon[74426]: pgmap v145: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.6 KiB/s rd, 938 B/s wr, 3 op/s
Dec 11 09:22:11 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-2-0-compute-0-iryjby[107865]: 11/12/2025 09:22:11 : epoch 693a8d31 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd9fc0016a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 11 09:22:11 compute-0 ceph-mgr[74715]: log_channel(cluster) log [DBG] : pgmap v146: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.7 KiB/s rd, 426 B/s wr, 2 op/s
Dec 11 09:22:11 compute-0 radosgw[93354]: ====== starting new request req=0x7f36cd2845d0 =====
Dec 11 09:22:11 compute-0 radosgw[93354]: ====== req done req=0x7f36cd2845d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 11 09:22:11 compute-0 radosgw[93354]: beast: 0x7f36cd2845d0: 192.168.122.100 - anonymous [11/Dec/2025:09:22:11.253 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 11 09:22:11 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-2-0-compute-0-iryjby[107865]: 11/12/2025 09:22:11 : epoch 693a8d31 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd9f00016a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 11 09:22:11 compute-0 radosgw[93354]: ====== starting new request req=0x7f36cd2845d0 =====
Dec 11 09:22:11 compute-0 radosgw[93354]: ====== req done req=0x7f36cd2845d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 11 09:22:11 compute-0 radosgw[93354]: beast: 0x7f36cd2845d0: 192.168.122.102 - anonymous [11/Dec/2025:09:22:11.353 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 11 09:22:11 compute-0 ceph-mon[74426]: mon.compute-0@0(leader).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 11 09:22:11 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-2-0-compute-0-iryjby[107865]: 11/12/2025 09:22:11 : epoch 693a8d31 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd9f8001ac0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 11 09:22:12 compute-0 ceph-mon[74426]: pgmap v146: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.7 KiB/s rd, 426 B/s wr, 2 op/s
Dec 11 09:22:13 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-2-0-compute-0-iryjby[107865]: 11/12/2025 09:22:13 : epoch 693a8d31 : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fda04001ea0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 11 09:22:13 compute-0 ceph-mgr[74715]: log_channel(cluster) log [DBG] : pgmap v147: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.7 KiB/s rd, 426 B/s wr, 2 op/s
Dec 11 09:22:13 compute-0 radosgw[93354]: ====== starting new request req=0x7f36cd2845d0 =====
Dec 11 09:22:13 compute-0 radosgw[93354]: ====== req done req=0x7f36cd2845d0 op status=0 http_status=200 latency=0.001000031s ======
Dec 11 09:22:13 compute-0 radosgw[93354]: beast: 0x7f36cd2845d0: 192.168.122.100 - anonymous [11/Dec/2025:09:22:13.255 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Dec 11 09:22:13 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-2-0-compute-0-iryjby[107865]: 11/12/2025 09:22:13 : epoch 693a8d31 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd9fc0016a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 11 09:22:13 compute-0 radosgw[93354]: ====== starting new request req=0x7f36cd2845d0 =====
Dec 11 09:22:13 compute-0 radosgw[93354]: ====== req done req=0x7f36cd2845d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 11 09:22:13 compute-0 radosgw[93354]: beast: 0x7f36cd2845d0: 192.168.122.102 - anonymous [11/Dec/2025:09:22:13.357 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 11 09:22:13 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-2-0-compute-0-iryjby[107865]: 11/12/2025 09:22:13 : epoch 693a8d31 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd9f00016a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 11 09:22:13 compute-0 ceph-mon[74426]: pgmap v147: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.7 KiB/s rd, 426 B/s wr, 2 op/s
Dec 11 09:22:14 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-mgr-compute-0-wwpcae[74711]: ::ffff:192.168.122.100 - - [11/Dec/2025:09:22:14] "GET /metrics HTTP/1.1" 200 48284 "" "Prometheus/2.51.0"
Dec 11 09:22:14 compute-0 ceph-mgr[74715]: [prometheus INFO cherrypy.access.139680037156848] ::ffff:192.168.122.100 - - [11/Dec/2025:09:22:14] "GET /metrics HTTP/1.1" 200 48284 "" "Prometheus/2.51.0"
Dec 11 09:22:15 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-2-0-compute-0-iryjby[107865]: 11/12/2025 09:22:15 : epoch 693a8d31 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd9f8001ac0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 11 09:22:15 compute-0 ceph-mgr[74715]: log_channel(cluster) log [DBG] : pgmap v148: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.9 KiB/s rd, 426 B/s wr, 2 op/s
Dec 11 09:22:15 compute-0 radosgw[93354]: ====== starting new request req=0x7f36cd2845d0 =====
Dec 11 09:22:15 compute-0 radosgw[93354]: ====== req done req=0x7f36cd2845d0 op status=0 http_status=200 latency=0.001000032s ======
Dec 11 09:22:15 compute-0 radosgw[93354]: beast: 0x7f36cd2845d0: 192.168.122.100 - anonymous [11/Dec/2025:09:22:15.258 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Dec 11 09:22:15 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-2-0-compute-0-iryjby[107865]: 11/12/2025 09:22:15 : epoch 693a8d31 : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fda04001ea0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 11 09:22:15 compute-0 radosgw[93354]: ====== starting new request req=0x7f36cd2845d0 =====
Dec 11 09:22:15 compute-0 radosgw[93354]: ====== req done req=0x7f36cd2845d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 11 09:22:15 compute-0 radosgw[93354]: beast: 0x7f36cd2845d0: 192.168.122.102 - anonymous [11/Dec/2025:09:22:15.361 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 11 09:22:15 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-2-0-compute-0-iryjby[107865]: 11/12/2025 09:22:15 : epoch 693a8d31 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd9fc0016a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 11 09:22:16 compute-0 sudo[107970]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 11 09:22:16 compute-0 sudo[107970]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 11 09:22:16 compute-0 sudo[107970]: pam_unix(sudo:session): session closed for user root
Dec 11 09:22:16 compute-0 sudo[107995]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --timeout 895 gather-facts
Dec 11 09:22:16 compute-0 sudo[107995]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 11 09:22:16 compute-0 ceph-mon[74426]: pgmap v148: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.9 KiB/s rd, 426 B/s wr, 2 op/s
Dec 11 09:22:16 compute-0 ceph-mon[74426]: mon.compute-0@0(leader).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 11 09:22:16 compute-0 sudo[107995]: pam_unix(sudo:session): session closed for user root
Dec 11 09:22:16 compute-0 ceph-mon[74426]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 11 09:22:16 compute-0 ceph-mon[74426]: log_channel(audit) log [DBG] : from='mgr.14577 192.168.122.100:0/2102670525' entity='mgr.compute-0.wwpcae' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 11 09:22:16 compute-0 ceph-mon[74426]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Dec 11 09:22:16 compute-0 ceph-mon[74426]: log_channel(audit) log [INF] : from='mgr.14577 192.168.122.100:0/2102670525' entity='mgr.compute-0.wwpcae' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec 11 09:22:16 compute-0 ceph-mon[74426]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Dec 11 09:22:16 compute-0 ceph-mon[74426]: log_channel(audit) log [INF] : from='mgr.14577 192.168.122.100:0/2102670525' entity='mgr.compute-0.wwpcae' 
Dec 11 09:22:16 compute-0 ceph-mon[74426]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Dec 11 09:22:16 compute-0 ceph-mon[74426]: log_channel(audit) log [INF] : from='mgr.14577 192.168.122.100:0/2102670525' entity='mgr.compute-0.wwpcae' 
Dec 11 09:22:16 compute-0 ceph-mon[74426]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Dec 11 09:22:16 compute-0 ceph-mon[74426]: log_channel(audit) log [DBG] : from='mgr.14577 192.168.122.100:0/2102670525' entity='mgr.compute-0.wwpcae' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec 11 09:22:16 compute-0 ceph-mon[74426]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Dec 11 09:22:16 compute-0 ceph-mon[74426]: log_channel(audit) log [INF] : from='mgr.14577 192.168.122.100:0/2102670525' entity='mgr.compute-0.wwpcae' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec 11 09:22:16 compute-0 ceph-mon[74426]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 11 09:22:16 compute-0 ceph-mon[74426]: log_channel(audit) log [DBG] : from='mgr.14577 192.168.122.100:0/2102670525' entity='mgr.compute-0.wwpcae' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 11 09:22:16 compute-0 sudo[108051]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 11 09:22:16 compute-0 sudo[108051]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 11 09:22:16 compute-0 sudo[108051]: pam_unix(sudo:session): session closed for user root
Dec 11 09:22:16 compute-0 sudo[108076]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Dec 11 09:22:16 compute-0 sudo[108076]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 11 09:22:17 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-2-0-compute-0-iryjby[107865]: 11/12/2025 09:22:17 : epoch 693a8d31 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd9f00016a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 11 09:22:17 compute-0 ceph-mgr[74715]: log_channel(cluster) log [DBG] : pgmap v149: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 B/s wr, 0 op/s
Dec 11 09:22:17 compute-0 ceph-mon[74426]: from='mgr.14577 192.168.122.100:0/2102670525' entity='mgr.compute-0.wwpcae' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 11 09:22:17 compute-0 ceph-mon[74426]: from='mgr.14577 192.168.122.100:0/2102670525' entity='mgr.compute-0.wwpcae' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec 11 09:22:17 compute-0 ceph-mon[74426]: from='mgr.14577 192.168.122.100:0/2102670525' entity='mgr.compute-0.wwpcae' 
Dec 11 09:22:17 compute-0 ceph-mon[74426]: from='mgr.14577 192.168.122.100:0/2102670525' entity='mgr.compute-0.wwpcae' 
Dec 11 09:22:17 compute-0 ceph-mon[74426]: from='mgr.14577 192.168.122.100:0/2102670525' entity='mgr.compute-0.wwpcae' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec 11 09:22:17 compute-0 ceph-mon[74426]: from='mgr.14577 192.168.122.100:0/2102670525' entity='mgr.compute-0.wwpcae' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec 11 09:22:17 compute-0 ceph-mon[74426]: from='mgr.14577 192.168.122.100:0/2102670525' entity='mgr.compute-0.wwpcae' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 11 09:22:17 compute-0 radosgw[93354]: ====== starting new request req=0x7f36cd2845d0 =====
Dec 11 09:22:17 compute-0 radosgw[93354]: ====== req done req=0x7f36cd2845d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 11 09:22:17 compute-0 radosgw[93354]: beast: 0x7f36cd2845d0: 192.168.122.100 - anonymous [11/Dec/2025:09:22:17.261 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 11 09:22:17 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-2-0-compute-0-iryjby[107865]: 11/12/2025 09:22:17 : epoch 693a8d31 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd9f8001ac0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 11 09:22:17 compute-0 radosgw[93354]: ====== starting new request req=0x7f36cd2845d0 =====
Dec 11 09:22:17 compute-0 radosgw[93354]: ====== req done req=0x7f36cd2845d0 op status=0 http_status=200 latency=0.002000063s ======
Dec 11 09:22:17 compute-0 radosgw[93354]: beast: 0x7f36cd2845d0: 192.168.122.102 - anonymous [11/Dec/2025:09:22:17.363 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000063s
Dec 11 09:22:17 compute-0 podman[108141]: 2025-12-11 09:22:17.372753735 +0000 UTC m=+0.044497369 container create 5bd02bdaa134bc9b86c34c60998aab346bf8132586f6e7d619086f4ae7b61cf8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sharp_meninsky, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, CEPH_REF=squid, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 11 09:22:17 compute-0 systemd[1]: Started libpod-conmon-5bd02bdaa134bc9b86c34c60998aab346bf8132586f6e7d619086f4ae7b61cf8.scope.
Dec 11 09:22:17 compute-0 systemd[1]: Started libcrun container.
Dec 11 09:22:17 compute-0 podman[108141]: 2025-12-11 09:22:17.352067919 +0000 UTC m=+0.023811583 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec 11 09:22:17 compute-0 podman[108141]: 2025-12-11 09:22:17.457588702 +0000 UTC m=+0.129332336 container init 5bd02bdaa134bc9b86c34c60998aab346bf8132586f6e7d619086f4ae7b61cf8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sharp_meninsky, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, CEPH_REF=squid)
Dec 11 09:22:17 compute-0 podman[108141]: 2025-12-11 09:22:17.466206851 +0000 UTC m=+0.137950485 container start 5bd02bdaa134bc9b86c34c60998aab346bf8132586f6e7d619086f4ae7b61cf8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sharp_meninsky, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 11 09:22:17 compute-0 podman[108141]: 2025-12-11 09:22:17.469455881 +0000 UTC m=+0.141199505 container attach 5bd02bdaa134bc9b86c34c60998aab346bf8132586f6e7d619086f4ae7b61cf8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sharp_meninsky, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Dec 11 09:22:17 compute-0 sharp_meninsky[108157]: 167 167
Dec 11 09:22:17 compute-0 systemd[1]: libpod-5bd02bdaa134bc9b86c34c60998aab346bf8132586f6e7d619086f4ae7b61cf8.scope: Deactivated successfully.
Dec 11 09:22:17 compute-0 podman[108141]: 2025-12-11 09:22:17.472216918 +0000 UTC m=+0.143960582 container died 5bd02bdaa134bc9b86c34c60998aab346bf8132586f6e7d619086f4ae7b61cf8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sharp_meninsky, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Dec 11 09:22:17 compute-0 systemd[1]: var-lib-containers-storage-overlay-229966a7548029fb28ffa7c25cad03d369930825d875ea08f0b06a15d2ecd004-merged.mount: Deactivated successfully.
Dec 11 09:22:17 compute-0 podman[108141]: 2025-12-11 09:22:17.51650131 +0000 UTC m=+0.188244944 container remove 5bd02bdaa134bc9b86c34c60998aab346bf8132586f6e7d619086f4ae7b61cf8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sharp_meninsky, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 11 09:22:17 compute-0 systemd[1]: libpod-conmon-5bd02bdaa134bc9b86c34c60998aab346bf8132586f6e7d619086f4ae7b61cf8.scope: Deactivated successfully.
Dec 11 09:22:17 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-2-0-compute-0-iryjby[107865]: 11/12/2025 09:22:17 : epoch 693a8d31 : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fda04001ea0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 11 09:22:17 compute-0 podman[108178]: 2025-12-11 09:22:17.687483315 +0000 UTC m=+0.044176850 container create 82f8a46b38d8f6f47b78998db6c31e41520fa8c2ccea2fac521c3b13d6219018 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=clever_hodgkin, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 11 09:22:17 compute-0 systemd[1]: Started libpod-conmon-82f8a46b38d8f6f47b78998db6c31e41520fa8c2ccea2fac521c3b13d6219018.scope.
Dec 11 09:22:17 compute-0 systemd[1]: Started libcrun container.
Dec 11 09:22:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1482ee56463ad31f89e0a5db0448f34b766783cad1e44eb92d18912a58adbda3/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 11 09:22:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1482ee56463ad31f89e0a5db0448f34b766783cad1e44eb92d18912a58adbda3/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 11 09:22:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1482ee56463ad31f89e0a5db0448f34b766783cad1e44eb92d18912a58adbda3/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 11 09:22:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1482ee56463ad31f89e0a5db0448f34b766783cad1e44eb92d18912a58adbda3/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 11 09:22:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1482ee56463ad31f89e0a5db0448f34b766783cad1e44eb92d18912a58adbda3/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec 11 09:22:17 compute-0 podman[108178]: 2025-12-11 09:22:17.669737241 +0000 UTC m=+0.026430806 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec 11 09:22:17 compute-0 podman[108178]: 2025-12-11 09:22:17.778739002 +0000 UTC m=+0.135432547 container init 82f8a46b38d8f6f47b78998db6c31e41520fa8c2ccea2fac521c3b13d6219018 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=clever_hodgkin, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 11 09:22:17 compute-0 podman[108178]: 2025-12-11 09:22:17.787493745 +0000 UTC m=+0.144187300 container start 82f8a46b38d8f6f47b78998db6c31e41520fa8c2ccea2fac521c3b13d6219018 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=clever_hodgkin, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=squid, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1)
Dec 11 09:22:17 compute-0 podman[108178]: 2025-12-11 09:22:17.79151333 +0000 UTC m=+0.148206885 container attach 82f8a46b38d8f6f47b78998db6c31e41520fa8c2ccea2fac521c3b13d6219018 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=clever_hodgkin, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250325, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 11 09:22:17 compute-0 sudo[108199]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 11 09:22:17 compute-0 sudo[108199]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 11 09:22:17 compute-0 sudo[108199]: pam_unix(sudo:session): session closed for user root
Dec 11 09:22:18 compute-0 clever_hodgkin[108194]: --> passed data devices: 0 physical, 1 LVM
Dec 11 09:22:18 compute-0 clever_hodgkin[108194]: --> All data devices are unavailable
Dec 11 09:22:18 compute-0 systemd[1]: libpod-82f8a46b38d8f6f47b78998db6c31e41520fa8c2ccea2fac521c3b13d6219018.scope: Deactivated successfully.
Dec 11 09:22:18 compute-0 podman[108178]: 2025-12-11 09:22:18.186016289 +0000 UTC m=+0.542709834 container died 82f8a46b38d8f6f47b78998db6c31e41520fa8c2ccea2fac521c3b13d6219018 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=clever_hodgkin, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True)
Dec 11 09:22:18 compute-0 ceph-mon[74426]: pgmap v149: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 B/s wr, 0 op/s
Dec 11 09:22:18 compute-0 systemd[1]: var-lib-containers-storage-overlay-1482ee56463ad31f89e0a5db0448f34b766783cad1e44eb92d18912a58adbda3-merged.mount: Deactivated successfully.
Dec 11 09:22:18 compute-0 podman[108178]: 2025-12-11 09:22:18.319452562 +0000 UTC m=+0.676146097 container remove 82f8a46b38d8f6f47b78998db6c31e41520fa8c2ccea2fac521c3b13d6219018 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=clever_hodgkin, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Dec 11 09:22:18 compute-0 systemd[1]: libpod-conmon-82f8a46b38d8f6f47b78998db6c31e41520fa8c2ccea2fac521c3b13d6219018.scope: Deactivated successfully.
Dec 11 09:22:18 compute-0 sudo[108076]: pam_unix(sudo:session): session closed for user root
Dec 11 09:22:18 compute-0 sudo[108248]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 11 09:22:18 compute-0 sudo[108248]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 11 09:22:18 compute-0 sudo[108248]: pam_unix(sudo:session): session closed for user root
Dec 11 09:22:18 compute-0 sudo[108273]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060 -- lvm list --format json
Dec 11 09:22:18 compute-0 sudo[108273]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 11 09:22:19 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-2-0-compute-0-iryjby[107865]: 11/12/2025 09:22:19 : epoch 693a8d31 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd9fc002b10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 11 09:22:19 compute-0 podman[108340]: 2025-12-11 09:22:18.920969639 +0000 UTC m=+0.027440387 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec 11 09:22:19 compute-0 podman[108340]: 2025-12-11 09:22:19.181497867 +0000 UTC m=+0.287968595 container create 396a2b6b5e2324ae2ade958558f092c53eba4d999d3eb2421e5b8587226ec0bf (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=thirsty_ritchie, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Dec 11 09:22:19 compute-0 ceph-mgr[74715]: log_channel(cluster) log [DBG] : pgmap v150: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 B/s wr, 0 op/s
Dec 11 09:22:19 compute-0 radosgw[93354]: ====== starting new request req=0x7f36cd2845d0 =====
Dec 11 09:22:19 compute-0 radosgw[93354]: ====== req done req=0x7f36cd2845d0 op status=0 http_status=200 latency=0.001000031s ======
Dec 11 09:22:19 compute-0 radosgw[93354]: beast: 0x7f36cd2845d0: 192.168.122.100 - anonymous [11/Dec/2025:09:22:19.263 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Dec 11 09:22:19 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-2-0-compute-0-iryjby[107865]: 11/12/2025 09:22:19 : epoch 693a8d31 : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd9fc002b10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 11 09:22:19 compute-0 radosgw[93354]: ====== starting new request req=0x7f36cd2845d0 =====
Dec 11 09:22:19 compute-0 radosgw[93354]: ====== req done req=0x7f36cd2845d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 11 09:22:19 compute-0 radosgw[93354]: beast: 0x7f36cd2845d0: 192.168.122.102 - anonymous [11/Dec/2025:09:22:19.368 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 11 09:22:19 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-2-0-compute-0-iryjby[107865]: 11/12/2025 09:22:19 : epoch 693a8d31 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd9f0002b10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 11 09:22:19 compute-0 systemd[1]: Started libpod-conmon-396a2b6b5e2324ae2ade958558f092c53eba4d999d3eb2421e5b8587226ec0bf.scope.
Dec 11 09:22:19 compute-0 systemd[1]: Started libcrun container.
Dec 11 09:22:19 compute-0 ceph-mon[74426]: pgmap v150: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 B/s wr, 0 op/s
Dec 11 09:22:19 compute-0 podman[108340]: 2025-12-11 09:22:19.620292577 +0000 UTC m=+0.726763325 container init 396a2b6b5e2324ae2ade958558f092c53eba4d999d3eb2421e5b8587226ec0bf (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=thirsty_ritchie, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 11 09:22:19 compute-0 podman[108340]: 2025-12-11 09:22:19.629492005 +0000 UTC m=+0.735962733 container start 396a2b6b5e2324ae2ade958558f092c53eba4d999d3eb2421e5b8587226ec0bf (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=thirsty_ritchie, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325)
Dec 11 09:22:19 compute-0 thirsty_ritchie[108357]: 167 167
Dec 11 09:22:19 compute-0 systemd[1]: libpod-396a2b6b5e2324ae2ade958558f092c53eba4d999d3eb2421e5b8587226ec0bf.scope: Deactivated successfully.
Dec 11 09:22:19 compute-0 podman[108340]: 2025-12-11 09:22:19.651273094 +0000 UTC m=+0.757743822 container attach 396a2b6b5e2324ae2ade958558f092c53eba4d999d3eb2421e5b8587226ec0bf (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=thirsty_ritchie, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 11 09:22:19 compute-0 podman[108340]: 2025-12-11 09:22:19.652248985 +0000 UTC m=+0.758719703 container died 396a2b6b5e2324ae2ade958558f092c53eba4d999d3eb2421e5b8587226ec0bf (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=thirsty_ritchie, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default)
Dec 11 09:22:19 compute-0 systemd[1]: var-lib-containers-storage-overlay-7a64d7f56a11dd1f19078f808660c06ea59c83a93b6979a2f4e3de8638a9211e-merged.mount: Deactivated successfully.
Dec 11 09:22:19 compute-0 podman[108340]: 2025-12-11 09:22:19.774912882 +0000 UTC m=+0.881383620 container remove 396a2b6b5e2324ae2ade958558f092c53eba4d999d3eb2421e5b8587226ec0bf (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=thirsty_ritchie, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1)
Dec 11 09:22:19 compute-0 systemd[1]: libpod-conmon-396a2b6b5e2324ae2ade958558f092c53eba4d999d3eb2421e5b8587226ec0bf.scope: Deactivated successfully.
Dec 11 09:22:19 compute-0 podman[108382]: 2025-12-11 09:22:19.935700919 +0000 UTC m=+0.044855291 container create e42945cceeb421cbc97eb0d3191d88cbf1d4a812b675ecbd90a4d51c632f2735 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=strange_yonath, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325)
Dec 11 09:22:19 compute-0 systemd[1]: Started libpod-conmon-e42945cceeb421cbc97eb0d3191d88cbf1d4a812b675ecbd90a4d51c632f2735.scope.
Dec 11 09:22:20 compute-0 systemd[1]: Started libcrun container.
Dec 11 09:22:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4c765df8f05e2595e489c17d600ec61ea9ef31b5b983efacb3d28b23adaa93ba/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 11 09:22:20 compute-0 podman[108382]: 2025-12-11 09:22:19.917043236 +0000 UTC m=+0.026197638 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec 11 09:22:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4c765df8f05e2595e489c17d600ec61ea9ef31b5b983efacb3d28b23adaa93ba/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 11 09:22:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4c765df8f05e2595e489c17d600ec61ea9ef31b5b983efacb3d28b23adaa93ba/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 11 09:22:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4c765df8f05e2595e489c17d600ec61ea9ef31b5b983efacb3d28b23adaa93ba/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 11 09:22:20 compute-0 podman[108382]: 2025-12-11 09:22:20.096593508 +0000 UTC m=+0.205747900 container init e42945cceeb421cbc97eb0d3191d88cbf1d4a812b675ecbd90a4d51c632f2735 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=strange_yonath, io.buildah.version=1.40.1, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Dec 11 09:22:20 compute-0 podman[108382]: 2025-12-11 09:22:20.106023103 +0000 UTC m=+0.215177485 container start e42945cceeb421cbc97eb0d3191d88cbf1d4a812b675ecbd90a4d51c632f2735 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=strange_yonath, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 11 09:22:20 compute-0 podman[108382]: 2025-12-11 09:22:20.157962673 +0000 UTC m=+0.267117115 container attach e42945cceeb421cbc97eb0d3191d88cbf1d4a812b675ecbd90a4d51c632f2735 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=strange_yonath, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Dec 11 09:22:20 compute-0 strange_yonath[108398]: {
Dec 11 09:22:20 compute-0 strange_yonath[108398]:     "1": [
Dec 11 09:22:20 compute-0 strange_yonath[108398]:         {
Dec 11 09:22:20 compute-0 strange_yonath[108398]:             "devices": [
Dec 11 09:22:20 compute-0 strange_yonath[108398]:                 "/dev/loop3"
Dec 11 09:22:20 compute-0 strange_yonath[108398]:             ],
Dec 11 09:22:20 compute-0 strange_yonath[108398]:             "lv_name": "ceph_lv0",
Dec 11 09:22:20 compute-0 strange_yonath[108398]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec 11 09:22:20 compute-0 strange_yonath[108398]:             "lv_size": "21470642176",
Dec 11 09:22:20 compute-0 strange_yonath[108398]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=TIjPv6-JN4c-U1Hf-F3k1-2mgj-CwGi-pbjIHh,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=865a308b-cdc3-4034-b5eb-feb596b462bf,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 11 09:22:20 compute-0 strange_yonath[108398]:             "lv_uuid": "TIjPv6-JN4c-U1Hf-F3k1-2mgj-CwGi-pbjIHh",
Dec 11 09:22:20 compute-0 strange_yonath[108398]:             "name": "ceph_lv0",
Dec 11 09:22:20 compute-0 strange_yonath[108398]:             "path": "/dev/ceph_vg0/ceph_lv0",
Dec 11 09:22:20 compute-0 strange_yonath[108398]:             "tags": {
Dec 11 09:22:20 compute-0 strange_yonath[108398]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec 11 09:22:20 compute-0 strange_yonath[108398]:                 "ceph.block_uuid": "TIjPv6-JN4c-U1Hf-F3k1-2mgj-CwGi-pbjIHh",
Dec 11 09:22:20 compute-0 strange_yonath[108398]:                 "ceph.cephx_lockbox_secret": "",
Dec 11 09:22:20 compute-0 strange_yonath[108398]:                 "ceph.cluster_fsid": "31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060",
Dec 11 09:22:20 compute-0 strange_yonath[108398]:                 "ceph.cluster_name": "ceph",
Dec 11 09:22:20 compute-0 strange_yonath[108398]:                 "ceph.crush_device_class": "",
Dec 11 09:22:20 compute-0 strange_yonath[108398]:                 "ceph.encrypted": "0",
Dec 11 09:22:20 compute-0 strange_yonath[108398]:                 "ceph.osd_fsid": "865a308b-cdc3-4034-b5eb-feb596b462bf",
Dec 11 09:22:20 compute-0 strange_yonath[108398]:                 "ceph.osd_id": "1",
Dec 11 09:22:20 compute-0 strange_yonath[108398]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 11 09:22:20 compute-0 strange_yonath[108398]:                 "ceph.type": "block",
Dec 11 09:22:20 compute-0 strange_yonath[108398]:                 "ceph.vdo": "0",
Dec 11 09:22:20 compute-0 strange_yonath[108398]:                 "ceph.with_tpm": "0"
Dec 11 09:22:20 compute-0 strange_yonath[108398]:             },
Dec 11 09:22:20 compute-0 strange_yonath[108398]:             "type": "block",
Dec 11 09:22:20 compute-0 strange_yonath[108398]:             "vg_name": "ceph_vg0"
Dec 11 09:22:20 compute-0 strange_yonath[108398]:         }
Dec 11 09:22:20 compute-0 strange_yonath[108398]:     ]
Dec 11 09:22:20 compute-0 strange_yonath[108398]: }
Dec 11 09:22:20 compute-0 systemd[1]: libpod-e42945cceeb421cbc97eb0d3191d88cbf1d4a812b675ecbd90a4d51c632f2735.scope: Deactivated successfully.
Dec 11 09:22:20 compute-0 podman[108382]: 2025-12-11 09:22:20.43993722 +0000 UTC m=+0.549091582 container died e42945cceeb421cbc97eb0d3191d88cbf1d4a812b675ecbd90a4d51c632f2735 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=strange_yonath, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Dec 11 09:22:20 compute-0 systemd[1]: var-lib-containers-storage-overlay-4c765df8f05e2595e489c17d600ec61ea9ef31b5b983efacb3d28b23adaa93ba-merged.mount: Deactivated successfully.
Dec 11 09:22:20 compute-0 podman[108382]: 2025-12-11 09:22:20.729170454 +0000 UTC m=+0.838324826 container remove e42945cceeb421cbc97eb0d3191d88cbf1d4a812b675ecbd90a4d51c632f2735 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=strange_yonath, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 11 09:22:20 compute-0 sudo[108273]: pam_unix(sudo:session): session closed for user root
Dec 11 09:22:20 compute-0 systemd[1]: libpod-conmon-e42945cceeb421cbc97eb0d3191d88cbf1d4a812b675ecbd90a4d51c632f2735.scope: Deactivated successfully.
Dec 11 09:22:20 compute-0 sudo[108421]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 11 09:22:20 compute-0 sudo[108421]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 11 09:22:20 compute-0 sudo[108421]: pam_unix(sudo:session): session closed for user root
Dec 11 09:22:20 compute-0 sudo[108446]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060 -- raw list --format json
Dec 11 09:22:20 compute-0 sudo[108446]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 11 09:22:21 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-2-0-compute-0-iryjby[107865]: 11/12/2025 09:22:21 : epoch 693a8d31 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fda0c001c00 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 11 09:22:21 compute-0 ceph-mgr[74715]: log_channel(cluster) log [DBG] : pgmap v151: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 11 09:22:21 compute-0 radosgw[93354]: ====== starting new request req=0x7f36cd2845d0 =====
Dec 11 09:22:21 compute-0 radosgw[93354]: ====== req done req=0x7f36cd2845d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 11 09:22:21 compute-0 radosgw[93354]: beast: 0x7f36cd2845d0: 192.168.122.100 - anonymous [11/Dec/2025:09:22:21.266 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 11 09:22:21 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-2-0-compute-0-iryjby[107865]: 11/12/2025 09:22:21 : epoch 693a8d31 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd9f8002f50 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 11 09:22:21 compute-0 radosgw[93354]: ====== starting new request req=0x7f36cd2845d0 =====
Dec 11 09:22:21 compute-0 radosgw[93354]: ====== req done req=0x7f36cd2845d0 op status=0 http_status=200 latency=0.001000031s ======
Dec 11 09:22:21 compute-0 radosgw[93354]: beast: 0x7f36cd2845d0: 192.168.122.102 - anonymous [11/Dec/2025:09:22:21.370 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Dec 11 09:22:21 compute-0 podman[108512]: 2025-12-11 09:22:21.474469108 +0000 UTC m=+0.115850676 container create a6c3af1569dd77fdaade1bba809d380df7253223c2713723e6961705e2f3095d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=intelligent_wilson, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 11 09:22:21 compute-0 podman[108512]: 2025-12-11 09:22:21.385785321 +0000 UTC m=+0.027166909 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec 11 09:22:21 compute-0 ceph-mon[74426]: mon.compute-0@0(leader).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 11 09:22:21 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-2-0-compute-0-iryjby[107865]: 11/12/2025 09:22:21 : epoch 693a8d31 : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd9fc002b10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 11 09:22:21 compute-0 systemd[1]: Started libpod-conmon-a6c3af1569dd77fdaade1bba809d380df7253223c2713723e6961705e2f3095d.scope.
Dec 11 09:22:21 compute-0 ceph-mon[74426]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #27. Immutable memtables: 0.
Dec 11 09:22:21 compute-0 ceph-mon[74426]: rocksdb: (Original Log Time 2025/12/11-09:22:21.604035) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Dec 11 09:22:21 compute-0 ceph-mon[74426]: rocksdb: [db/flush_job.cc:856] [default] [JOB 9] Flushing memtable with next log file: 27
Dec 11 09:22:21 compute-0 ceph-mon[74426]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765444941604215, "job": 9, "event": "flush_started", "num_memtables": 1, "num_entries": 1712, "num_deletes": 252, "total_data_size": 3636959, "memory_usage": 3679464, "flush_reason": "Manual Compaction"}
Dec 11 09:22:21 compute-0 ceph-mon[74426]: rocksdb: [db/flush_job.cc:885] [default] [JOB 9] Level-0 flush table #28: started
Dec 11 09:22:21 compute-0 systemd[1]: Started libcrun container.
Dec 11 09:22:21 compute-0 ceph-mon[74426]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765444941722712, "cf_name": "default", "job": 9, "event": "table_file_creation", "file_number": 28, "file_size": 2217112, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 10777, "largest_seqno": 12488, "table_properties": {"data_size": 2211440, "index_size": 2808, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1797, "raw_key_size": 14336, "raw_average_key_size": 20, "raw_value_size": 2199055, "raw_average_value_size": 3084, "num_data_blocks": 125, "num_entries": 713, "num_filter_entries": 713, "num_deletions": 252, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765444764, "oldest_key_time": 1765444764, "file_creation_time": 1765444941, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "e1ac8294-d02a-459a-8058-7f05c4f78e7d", "db_session_id": "U2WUY0PZPH8WS8N5I572", "orig_file_number": 28, "seqno_to_time_mapping": "N/A"}}
Dec 11 09:22:21 compute-0 ceph-mon[74426]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 9] Flush lasted 118733 microseconds, and 9449 cpu microseconds.
Dec 11 09:22:21 compute-0 ceph-mon[74426]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 11 09:22:21 compute-0 ceph-mon[74426]: rocksdb: (Original Log Time 2025/12/11-09:22:21.722804) [db/flush_job.cc:967] [default] [JOB 9] Level-0 flush table #28: 2217112 bytes OK
Dec 11 09:22:21 compute-0 ceph-mon[74426]: rocksdb: (Original Log Time 2025/12/11-09:22:21.722847) [db/memtable_list.cc:519] [default] Level-0 commit table #28 started
Dec 11 09:22:21 compute-0 ceph-mon[74426]: rocksdb: (Original Log Time 2025/12/11-09:22:21.977484) [db/memtable_list.cc:722] [default] Level-0 commit table #28: memtable #1 done
Dec 11 09:22:21 compute-0 ceph-mon[74426]: rocksdb: (Original Log Time 2025/12/11-09:22:21.977540) EVENT_LOG_v1 {"time_micros": 1765444941977530, "job": 9, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Dec 11 09:22:21 compute-0 ceph-mon[74426]: rocksdb: (Original Log Time 2025/12/11-09:22:21.977565) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Dec 11 09:22:21 compute-0 ceph-mon[74426]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 9] Try to delete WAL files size 3629895, prev total WAL file size 3630159, number of live WAL files 2.
Dec 11 09:22:21 compute-0 ceph-mon[74426]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000024.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 11 09:22:21 compute-0 podman[108512]: 2025-12-11 09:22:21.980197846 +0000 UTC m=+0.621579424 container init a6c3af1569dd77fdaade1bba809d380df7253223c2713723e6961705e2f3095d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=intelligent_wilson, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0)
Dec 11 09:22:21 compute-0 ceph-mon[74426]: rocksdb: (Original Log Time 2025/12/11-09:22:21.980382) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D6772737461740030' seq:72057594037927935, type:22 .. '6D67727374617400323533' seq:0, type:0; will stop at (end)
Dec 11 09:22:21 compute-0 ceph-mon[74426]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 10] Compacting 1@0 + 1@6 files to L6, score -1.00
Dec 11 09:22:21 compute-0 ceph-mon[74426]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 9 Base level 0, inputs: [28(2165KB)], [26(13MB)]
Dec 11 09:22:21 compute-0 ceph-mon[74426]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765444941980664, "job": 10, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [28], "files_L6": [26], "score": -1, "input_data_size": 16392474, "oldest_snapshot_seqno": -1}
Dec 11 09:22:21 compute-0 podman[108512]: 2025-12-11 09:22:21.989123435 +0000 UTC m=+0.630504993 container start a6c3af1569dd77fdaade1bba809d380df7253223c2713723e6961705e2f3095d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=intelligent_wilson, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.build-date=20250325)
Dec 11 09:22:21 compute-0 podman[108512]: 2025-12-11 09:22:21.993861122 +0000 UTC m=+0.635242700 container attach a6c3af1569dd77fdaade1bba809d380df7253223c2713723e6961705e2f3095d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=intelligent_wilson, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 11 09:22:21 compute-0 intelligent_wilson[108528]: 167 167
Dec 11 09:22:21 compute-0 systemd[1]: libpod-a6c3af1569dd77fdaade1bba809d380df7253223c2713723e6961705e2f3095d.scope: Deactivated successfully.
Dec 11 09:22:21 compute-0 podman[108512]: 2025-12-11 09:22:21.99667881 +0000 UTC m=+0.638060378 container died a6c3af1569dd77fdaade1bba809d380df7253223c2713723e6961705e2f3095d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=intelligent_wilson, CEPH_REF=squid, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 11 09:22:22 compute-0 ceph-mon[74426]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 10] Generated table #29: 4305 keys, 14581696 bytes, temperature: kUnknown
Dec 11 09:22:22 compute-0 ceph-mon[74426]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765444942340675, "cf_name": "default", "job": 10, "event": "table_file_creation", "file_number": 29, "file_size": 14581696, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 14548643, "index_size": 21167, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 10821, "raw_key_size": 108849, "raw_average_key_size": 25, "raw_value_size": 14465880, "raw_average_value_size": 3360, "num_data_blocks": 908, "num_entries": 4305, "num_filter_entries": 4305, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765444346, "oldest_key_time": 0, "file_creation_time": 1765444941, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "e1ac8294-d02a-459a-8058-7f05c4f78e7d", "db_session_id": "U2WUY0PZPH8WS8N5I572", "orig_file_number": 29, "seqno_to_time_mapping": "N/A"}}
Dec 11 09:22:22 compute-0 ceph-mon[74426]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 11 09:22:22 compute-0 systemd[1]: var-lib-containers-storage-overlay-20c5473dc82cc40a836fdc6b916839825e8517c05b997fe00510a8d921abfd9e-merged.mount: Deactivated successfully.
Dec 11 09:22:22 compute-0 ceph-mon[74426]: rocksdb: (Original Log Time 2025/12/11-09:22:22.341105) [db/compaction/compaction_job.cc:1663] [default] [JOB 10] Compacted 1@0 + 1@6 files to L6 => 14581696 bytes
Dec 11 09:22:22 compute-0 ceph-mon[74426]: rocksdb: (Original Log Time 2025/12/11-09:22:22.343080) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 45.5 rd, 40.5 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(2.1, 13.5 +0.0 blob) out(13.9 +0.0 blob), read-write-amplify(14.0) write-amplify(6.6) OK, records in: 4743, records dropped: 438 output_compression: NoCompression
Dec 11 09:22:22 compute-0 ceph-mon[74426]: rocksdb: (Original Log Time 2025/12/11-09:22:22.343197) EVENT_LOG_v1 {"time_micros": 1765444942343095, "job": 10, "event": "compaction_finished", "compaction_time_micros": 360124, "compaction_time_cpu_micros": 70660, "output_level": 6, "num_output_files": 1, "total_output_size": 14581696, "num_input_records": 4743, "num_output_records": 4305, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Dec 11 09:22:22 compute-0 ceph-mon[74426]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000028.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 11 09:22:22 compute-0 ceph-mon[74426]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765444942345698, "job": 10, "event": "table_file_deletion", "file_number": 28}
Dec 11 09:22:22 compute-0 ceph-mon[74426]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000026.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 11 09:22:22 compute-0 ceph-mon[74426]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765444942348485, "job": 10, "event": "table_file_deletion", "file_number": 26}
Dec 11 09:22:22 compute-0 ceph-mon[74426]: rocksdb: (Original Log Time 2025/12/11-09:22:21.980076) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 11 09:22:22 compute-0 ceph-mon[74426]: rocksdb: (Original Log Time 2025/12/11-09:22:22.348547) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 11 09:22:22 compute-0 ceph-mon[74426]: rocksdb: (Original Log Time 2025/12/11-09:22:22.348552) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 11 09:22:22 compute-0 ceph-mon[74426]: rocksdb: (Original Log Time 2025/12/11-09:22:22.348554) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 11 09:22:22 compute-0 ceph-mon[74426]: rocksdb: (Original Log Time 2025/12/11-09:22:22.348556) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 11 09:22:22 compute-0 ceph-mon[74426]: rocksdb: (Original Log Time 2025/12/11-09:22:22.348558) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 11 09:22:22 compute-0 ceph-mon[74426]: pgmap v151: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 11 09:22:22 compute-0 podman[108512]: 2025-12-11 09:22:22.390834689 +0000 UTC m=+1.032216247 container remove a6c3af1569dd77fdaade1bba809d380df7253223c2713723e6961705e2f3095d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=intelligent_wilson, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325)
Dec 11 09:22:22 compute-0 systemd[1]: libpod-conmon-a6c3af1569dd77fdaade1bba809d380df7253223c2713723e6961705e2f3095d.scope: Deactivated successfully.
Dec 11 09:22:22 compute-0 podman[108554]: 2025-12-11 09:22:22.54471112 +0000 UTC m=+0.028643125 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec 11 09:22:22 compute-0 podman[108554]: 2025-12-11 09:22:22.659079698 +0000 UTC m=+0.143011683 container create 40aeb0273b59b0dc1532a0d172c5bb0568867ee50e97a5a5dcaf8ab10667ac92 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hopeful_jackson, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 11 09:22:22 compute-0 systemd[1]: Started libpod-conmon-40aeb0273b59b0dc1532a0d172c5bb0568867ee50e97a5a5dcaf8ab10667ac92.scope.
Dec 11 09:22:22 compute-0 systemd[1]: Started libcrun container.
Dec 11 09:22:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/96a09c9ecd1c8eaddf55b0068f0ba4018764af4272cd3481c9f6ded5668bf4dc/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 11 09:22:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/96a09c9ecd1c8eaddf55b0068f0ba4018764af4272cd3481c9f6ded5668bf4dc/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 11 09:22:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/96a09c9ecd1c8eaddf55b0068f0ba4018764af4272cd3481c9f6ded5668bf4dc/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 11 09:22:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/96a09c9ecd1c8eaddf55b0068f0ba4018764af4272cd3481c9f6ded5668bf4dc/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 11 09:22:22 compute-0 podman[108554]: 2025-12-11 09:22:22.919262955 +0000 UTC m=+0.403194960 container init 40aeb0273b59b0dc1532a0d172c5bb0568867ee50e97a5a5dcaf8ab10667ac92 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hopeful_jackson, org.label-schema.vendor=CentOS, CEPH_REF=squid, ceph=True, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 11 09:22:22 compute-0 podman[108554]: 2025-12-11 09:22:22.92776181 +0000 UTC m=+0.411693795 container start 40aeb0273b59b0dc1532a0d172c5bb0568867ee50e97a5a5dcaf8ab10667ac92 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hopeful_jackson, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 11 09:22:23 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-2-0-compute-0-iryjby[107865]: 11/12/2025 09:22:23 : epoch 693a8d31 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd9f0002b10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 11 09:22:23 compute-0 ceph-mgr[74715]: [balancer INFO root] Optimize plan auto_2025-12-11_09:22:23
Dec 11 09:22:23 compute-0 ceph-mgr[74715]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec 11 09:22:23 compute-0 ceph-mgr[74715]: [balancer INFO root] do_upmap
Dec 11 09:22:23 compute-0 ceph-mgr[74715]: [balancer INFO root] pools ['backups', '.mgr', 'volumes', 'vms', 'cephfs.cephfs.data', 'default.rgw.control', 'cephfs.cephfs.meta', 'default.rgw.meta', '.rgw.root', 'images', 'default.rgw.log', '.nfs']
Dec 11 09:22:23 compute-0 ceph-mgr[74715]: [balancer INFO root] prepared 0/10 upmap changes
Dec 11 09:22:23 compute-0 ceph-mgr[74715]: log_channel(cluster) log [DBG] : pgmap v152: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 11 09:22:23 compute-0 radosgw[93354]: ====== starting new request req=0x7f36cd2845d0 =====
Dec 11 09:22:23 compute-0 radosgw[93354]: ====== req done req=0x7f36cd2845d0 op status=0 http_status=200 latency=0.002000063s ======
Dec 11 09:22:23 compute-0 radosgw[93354]: beast: 0x7f36cd2845d0: 192.168.122.100 - anonymous [11/Dec/2025:09:22:23.268 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000063s
Dec 11 09:22:23 compute-0 podman[108554]: 2025-12-11 09:22:23.275774878 +0000 UTC m=+0.759706883 container attach 40aeb0273b59b0dc1532a0d172c5bb0568867ee50e97a5a5dcaf8ab10667ac92 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hopeful_jackson, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 11 09:22:23 compute-0 ceph-mon[74426]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Dec 11 09:22:23 compute-0 ceph-mon[74426]: log_channel(audit) log [DBG] : from='mgr.14577 192.168.122.100:0/2102670525' entity='mgr.compute-0.wwpcae' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec 11 09:22:23 compute-0 ceph-mgr[74715]: [pg_autoscaler INFO root] _maybe_adjust
Dec 11 09:22:23 compute-0 ceph-mgr[74715]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 11 09:22:23 compute-0 ceph-mgr[74715]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec 11 09:22:23 compute-0 ceph-mgr[74715]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 11 09:22:23 compute-0 ceph-mgr[74715]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 11 09:22:23 compute-0 ceph-mgr[74715]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 11 09:22:23 compute-0 ceph-mgr[74715]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 11 09:22:23 compute-0 ceph-mgr[74715]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 11 09:22:23 compute-0 ceph-mgr[74715]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 11 09:22:23 compute-0 ceph-mgr[74715]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 11 09:22:23 compute-0 ceph-mgr[74715]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 11 09:22:23 compute-0 ceph-mgr[74715]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 11 09:22:23 compute-0 ceph-mgr[74715]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Dec 11 09:22:23 compute-0 ceph-mgr[74715]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 11 09:22:23 compute-0 ceph-mgr[74715]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 11 09:22:23 compute-0 ceph-mgr[74715]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 11 09:22:23 compute-0 ceph-mgr[74715]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Dec 11 09:22:23 compute-0 ceph-mgr[74715]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 11 09:22:23 compute-0 ceph-mgr[74715]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Dec 11 09:22:23 compute-0 ceph-mgr[74715]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 11 09:22:23 compute-0 ceph-mgr[74715]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Dec 11 09:22:23 compute-0 ceph-mgr[74715]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 11 09:22:23 compute-0 ceph-mgr[74715]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 11 09:22:23 compute-0 ceph-mgr[74715]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 11 09:22:23 compute-0 ceph-mgr[74715]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Dec 11 09:22:23 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-2-0-compute-0-iryjby[107865]: 11/12/2025 09:22:23 : epoch 693a8d31 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fda0c002700 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 11 09:22:23 compute-0 ceph-mgr[74715]: [volumes INFO mgr_util] scanning for idle connections..
Dec 11 09:22:23 compute-0 ceph-mgr[74715]: [volumes INFO mgr_util] cleaning up connections: []
Dec 11 09:22:23 compute-0 ceph-mgr[74715]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec 11 09:22:23 compute-0 ceph-mgr[74715]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 11 09:22:23 compute-0 ceph-mgr[74715]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 11 09:22:23 compute-0 ceph-mgr[74715]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 11 09:22:23 compute-0 ceph-mgr[74715]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 11 09:22:23 compute-0 ceph-mgr[74715]: [volumes INFO mgr_util] scanning for idle connections..
Dec 11 09:22:23 compute-0 ceph-mgr[74715]: [volumes INFO mgr_util] cleaning up connections: []
Dec 11 09:22:23 compute-0 radosgw[93354]: ====== starting new request req=0x7f36cd2845d0 =====
Dec 11 09:22:23 compute-0 radosgw[93354]: ====== req done req=0x7f36cd2845d0 op status=0 http_status=200 latency=0.001000032s ======
Dec 11 09:22:23 compute-0 radosgw[93354]: beast: 0x7f36cd2845d0: 192.168.122.102 - anonymous [11/Dec/2025:09:22:23.373 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Dec 11 09:22:23 compute-0 ceph-mgr[74715]: [volumes INFO mgr_util] scanning for idle connections..
Dec 11 09:22:23 compute-0 ceph-mgr[74715]: [volumes INFO mgr_util] cleaning up connections: []
Dec 11 09:22:23 compute-0 ceph-mgr[74715]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec 11 09:22:23 compute-0 ceph-mgr[74715]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 11 09:22:23 compute-0 ceph-mgr[74715]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 11 09:22:23 compute-0 ceph-mgr[74715]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 11 09:22:23 compute-0 ceph-mgr[74715]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 11 09:22:23 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-2-0-compute-0-iryjby[107865]: 11/12/2025 09:22:23 : epoch 693a8d31 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd9f8002f50 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 11 09:22:23 compute-0 lvm[108647]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Dec 11 09:22:23 compute-0 lvm[108647]: VG ceph_vg0 finished
Dec 11 09:22:23 compute-0 hopeful_jackson[108571]: {}
Dec 11 09:22:23 compute-0 ceph-mon[74426]: from='mgr.14577 192.168.122.100:0/2102670525' entity='mgr.compute-0.wwpcae' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec 11 09:22:23 compute-0 systemd[1]: libpod-40aeb0273b59b0dc1532a0d172c5bb0568867ee50e97a5a5dcaf8ab10667ac92.scope: Deactivated successfully.
Dec 11 09:22:23 compute-0 systemd[1]: libpod-40aeb0273b59b0dc1532a0d172c5bb0568867ee50e97a5a5dcaf8ab10667ac92.scope: Consumed 1.278s CPU time.
Dec 11 09:22:23 compute-0 podman[108554]: 2025-12-11 09:22:23.693237973 +0000 UTC m=+1.177169958 container died 40aeb0273b59b0dc1532a0d172c5bb0568867ee50e97a5a5dcaf8ab10667ac92 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hopeful_jackson, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=squid, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 11 09:22:24 compute-0 systemd[1]: var-lib-containers-storage-overlay-96a09c9ecd1c8eaddf55b0068f0ba4018764af4272cd3481c9f6ded5668bf4dc-merged.mount: Deactivated successfully.
Dec 11 09:22:24 compute-0 podman[108554]: 2025-12-11 09:22:24.415221588 +0000 UTC m=+1.899153593 container remove 40aeb0273b59b0dc1532a0d172c5bb0568867ee50e97a5a5dcaf8ab10667ac92 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hopeful_jackson, org.label-schema.build-date=20250325, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 11 09:22:24 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-mgr-compute-0-wwpcae[74711]: ::ffff:192.168.122.100 - - [11/Dec/2025:09:22:24] "GET /metrics HTTP/1.1" 200 48275 "" "Prometheus/2.51.0"
Dec 11 09:22:24 compute-0 ceph-mgr[74715]: [prometheus INFO cherrypy.access.139680037156848] ::ffff:192.168.122.100 - - [11/Dec/2025:09:22:24] "GET /metrics HTTP/1.1" 200 48275 "" "Prometheus/2.51.0"
Dec 11 09:22:24 compute-0 systemd[1]: libpod-conmon-40aeb0273b59b0dc1532a0d172c5bb0568867ee50e97a5a5dcaf8ab10667ac92.scope: Deactivated successfully.
Dec 11 09:22:24 compute-0 sudo[108446]: pam_unix(sudo:session): session closed for user root
Dec 11 09:22:24 compute-0 ceph-mon[74426]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec 11 09:22:24 compute-0 ceph-mon[74426]: log_channel(audit) log [INF] : from='mgr.14577 192.168.122.100:0/2102670525' entity='mgr.compute-0.wwpcae' 
Dec 11 09:22:24 compute-0 ceph-mon[74426]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec 11 09:22:24 compute-0 ceph-mon[74426]: pgmap v152: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 11 09:22:25 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-2-0-compute-0-iryjby[107865]: 11/12/2025 09:22:25 : epoch 693a8d31 : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd9fc003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 11 09:22:25 compute-0 ceph-mon[74426]: log_channel(audit) log [INF] : from='mgr.14577 192.168.122.100:0/2102670525' entity='mgr.compute-0.wwpcae' 
Dec 11 09:22:25 compute-0 sudo[108665]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 11 09:22:25 compute-0 ceph-mgr[74715]: log_channel(cluster) log [DBG] : pgmap v153: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 0 op/s
Dec 11 09:22:25 compute-0 sudo[108665]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 11 09:22:25 compute-0 sudo[108665]: pam_unix(sudo:session): session closed for user root
Dec 11 09:22:25 compute-0 radosgw[93354]: ====== starting new request req=0x7f36cd2845d0 =====
Dec 11 09:22:25 compute-0 radosgw[93354]: ====== req done req=0x7f36cd2845d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 11 09:22:25 compute-0 radosgw[93354]: beast: 0x7f36cd2845d0: 192.168.122.100 - anonymous [11/Dec/2025:09:22:25.273 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 11 09:22:25 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-2-0-compute-0-iryjby[107865]: 11/12/2025 09:22:25 : epoch 693a8d31 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd9f0002b10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 11 09:22:25 compute-0 radosgw[93354]: ====== starting new request req=0x7f36cd2845d0 =====
Dec 11 09:22:25 compute-0 radosgw[93354]: ====== req done req=0x7f36cd2845d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 11 09:22:25 compute-0 radosgw[93354]: beast: 0x7f36cd2845d0: 192.168.122.102 - anonymous [11/Dec/2025:09:22:25.378 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 11 09:22:25 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-2-0-compute-0-iryjby[107865]: 11/12/2025 09:22:25 : epoch 693a8d31 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fda0c002700 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 11 09:22:26 compute-0 ceph-mon[74426]: from='mgr.14577 192.168.122.100:0/2102670525' entity='mgr.compute-0.wwpcae' 
Dec 11 09:22:26 compute-0 ceph-mon[74426]: from='mgr.14577 192.168.122.100:0/2102670525' entity='mgr.compute-0.wwpcae' 
Dec 11 09:22:26 compute-0 ceph-mon[74426]: pgmap v153: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 0 op/s
Dec 11 09:22:26 compute-0 ceph-mon[74426]: mon.compute-0@0(leader).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 11 09:22:27 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-2-0-compute-0-iryjby[107865]: 11/12/2025 09:22:27 : epoch 693a8d31 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd9f8003c60 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 11 09:22:27 compute-0 ceph-mgr[74715]: log_channel(cluster) log [DBG] : pgmap v154: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 11 09:22:27 compute-0 radosgw[93354]: ====== starting new request req=0x7f36cd2845d0 =====
Dec 11 09:22:27 compute-0 radosgw[93354]: ====== req done req=0x7f36cd2845d0 op status=0 http_status=200 latency=0.001000031s ======
Dec 11 09:22:27 compute-0 radosgw[93354]: beast: 0x7f36cd2845d0: 192.168.122.100 - anonymous [11/Dec/2025:09:22:27.275 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Dec 11 09:22:27 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-2-0-compute-0-iryjby[107865]: 11/12/2025 09:22:27 : epoch 693a8d31 : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd9fc003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 11 09:22:27 compute-0 radosgw[93354]: ====== starting new request req=0x7f36cd2845d0 =====
Dec 11 09:22:27 compute-0 radosgw[93354]: ====== req done req=0x7f36cd2845d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 11 09:22:27 compute-0 radosgw[93354]: beast: 0x7f36cd2845d0: 192.168.122.102 - anonymous [11/Dec/2025:09:22:27.381 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 11 09:22:27 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-2-0-compute-0-iryjby[107865]: 11/12/2025 09:22:27 : epoch 693a8d31 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd9f0003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 11 09:22:28 compute-0 ceph-mon[74426]: pgmap v154: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 11 09:22:28 compute-0 ceph-mon[74426]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Dec 11 09:22:28 compute-0 ceph-mon[74426]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 600.2 total, 600.0 interval
                                           Cumulative writes: 2651 writes, 12K keys, 2650 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.04 MB/s
                                           Cumulative WAL: 2651 writes, 2650 syncs, 1.00 writes per sync, written: 0.02 GB, 0.04 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 2651 writes, 12K keys, 2650 commit groups, 1.0 writes per commit group, ingest: 24.17 MB, 0.04 MB/s
                                           Interval WAL: 2651 writes, 2650 syncs, 1.00 writes per sync, written: 0.02 GB, 0.04 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0     51.4      0.40              0.07         5    0.079       0      0       0.0       0.0
                                             L6      1/0   13.91 MB   0.0      0.1     0.0      0.0       0.1      0.0       0.0   2.5     83.0     73.8      0.71              0.20         4    0.177     17K   1788       0.0       0.0
                                            Sum      1/0   13.91 MB   0.0      0.1     0.0      0.0       0.1      0.0       0.0   3.5     53.1     65.7      1.10              0.27         9    0.123     17K   1788       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.1     0.0      0.0       0.1      0.0       0.0   3.6     58.8     72.7      1.00              0.27         8    0.124     17K   1788       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Low      0/0    0.00 KB   0.0      0.1     0.0      0.0       0.1      0.0       0.0   0.0     83.0     73.8      0.71              0.20         4    0.177     17K   1788       0.0       0.0
                                           High      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0     70.3      0.29              0.07         4    0.072       0      0       0.0       0.0
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.5      0.11              0.00         1    0.107       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.2 total, 600.0 interval
                                           Flush(GB): cumulative 0.020, interval 0.020
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.07 GB write, 0.12 MB/s write, 0.06 GB read, 0.10 MB/s read, 1.1 seconds
                                           Interval compaction: 0.07 GB write, 0.12 MB/s write, 0.06 GB read, 0.10 MB/s read, 1.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5578c10a7350#2 capacity: 304.00 MB usage: 1.35 MB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 0 last_secs: 0.000118 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(88,1.17 MB,0.386133%) FilterBlock(10,56.23 KB,0.0180646%) IndexBlock(10,119.98 KB,0.0385435%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
Dec 11 09:22:29 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-2-0-compute-0-iryjby[107865]: 11/12/2025 09:22:29 : epoch 693a8d31 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fda0c002700 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 11 09:22:29 compute-0 ceph-mgr[74715]: log_channel(cluster) log [DBG] : pgmap v155: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 op/s
Dec 11 09:22:29 compute-0 radosgw[93354]: ====== starting new request req=0x7f36cd2845d0 =====
Dec 11 09:22:29 compute-0 radosgw[93354]: ====== req done req=0x7f36cd2845d0 op status=0 http_status=200 latency=0.001000031s ======
Dec 11 09:22:29 compute-0 radosgw[93354]: beast: 0x7f36cd2845d0: 192.168.122.100 - anonymous [11/Dec/2025:09:22:29.278 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Dec 11 09:22:29 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-2-0-compute-0-iryjby[107865]: 11/12/2025 09:22:29 : epoch 693a8d31 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd9f8003c60 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 11 09:22:29 compute-0 radosgw[93354]: ====== starting new request req=0x7f36cd2845d0 =====
Dec 11 09:22:29 compute-0 radosgw[93354]: ====== req done req=0x7f36cd2845d0 op status=0 http_status=200 latency=0.001000031s ======
Dec 11 09:22:29 compute-0 radosgw[93354]: beast: 0x7f36cd2845d0: 192.168.122.102 - anonymous [11/Dec/2025:09:22:29.386 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Dec 11 09:22:29 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-2-0-compute-0-iryjby[107865]: 11/12/2025 09:22:29 : epoch 693a8d31 : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd9fc003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 11 09:22:29 compute-0 ceph-mon[74426]: pgmap v155: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 op/s
Dec 11 09:22:31 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-2-0-compute-0-iryjby[107865]: 11/12/2025 09:22:31 : epoch 693a8d31 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd9f0003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 11 09:22:31 compute-0 ceph-mgr[74715]: log_channel(cluster) log [DBG] : pgmap v156: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 11 09:22:31 compute-0 radosgw[93354]: ====== starting new request req=0x7f36cd2845d0 =====
Dec 11 09:22:31 compute-0 radosgw[93354]: ====== req done req=0x7f36cd2845d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 11 09:22:31 compute-0 radosgw[93354]: beast: 0x7f36cd2845d0: 192.168.122.100 - anonymous [11/Dec/2025:09:22:31.281 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 11 09:22:31 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-2-0-compute-0-iryjby[107865]: 11/12/2025 09:22:31 : epoch 693a8d31 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fda0c003800 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 11 09:22:31 compute-0 radosgw[93354]: ====== starting new request req=0x7f36cd2845d0 =====
Dec 11 09:22:31 compute-0 radosgw[93354]: ====== req done req=0x7f36cd2845d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 11 09:22:31 compute-0 radosgw[93354]: beast: 0x7f36cd2845d0: 192.168.122.102 - anonymous [11/Dec/2025:09:22:31.390 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 11 09:22:31 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-2-0-compute-0-iryjby[107865]: 11/12/2025 09:22:31 : epoch 693a8d31 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd9f8003c60 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 11 09:22:31 compute-0 ceph-mon[74426]: mon.compute-0@0(leader).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 11 09:22:32 compute-0 ceph-mon[74426]: pgmap v156: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 11 09:22:33 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-2-0-compute-0-iryjby[107865]: 11/12/2025 09:22:33 : epoch 693a8d31 : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd9fc003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 11 09:22:33 compute-0 ceph-mgr[74715]: log_channel(cluster) log [DBG] : pgmap v157: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 11 09:22:33 compute-0 radosgw[93354]: ====== starting new request req=0x7f36cd2845d0 =====
Dec 11 09:22:33 compute-0 radosgw[93354]: ====== req done req=0x7f36cd2845d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 11 09:22:33 compute-0 radosgw[93354]: beast: 0x7f36cd2845d0: 192.168.122.100 - anonymous [11/Dec/2025:09:22:33.283 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 11 09:22:33 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-2-0-compute-0-iryjby[107865]: 11/12/2025 09:22:33 : epoch 693a8d31 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd9f0003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 11 09:22:33 compute-0 radosgw[93354]: ====== starting new request req=0x7f36cd2845d0 =====
Dec 11 09:22:33 compute-0 radosgw[93354]: ====== req done req=0x7f36cd2845d0 op status=0 http_status=200 latency=0.001000032s ======
Dec 11 09:22:33 compute-0 radosgw[93354]: beast: 0x7f36cd2845d0: 192.168.122.102 - anonymous [11/Dec/2025:09:22:33.394 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Dec 11 09:22:33 compute-0 ceph-mon[74426]: pgmap v157: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 11 09:22:33 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-2-0-compute-0-iryjby[107865]: 11/12/2025 09:22:33 : epoch 693a8d31 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd9f8003c60 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 11 09:22:34 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-mgr-compute-0-wwpcae[74711]: ::ffff:192.168.122.100 - - [11/Dec/2025:09:22:34] "GET /metrics HTTP/1.1" 200 48275 "" "Prometheus/2.51.0"
Dec 11 09:22:34 compute-0 ceph-mgr[74715]: [prometheus INFO cherrypy.access.139680037156848] ::ffff:192.168.122.100 - - [11/Dec/2025:09:22:34] "GET /metrics HTTP/1.1" 200 48275 "" "Prometheus/2.51.0"
Dec 11 09:22:35 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-2-0-compute-0-iryjby[107865]: 11/12/2025 09:22:35 : epoch 693a8d31 : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fda0c003800 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 11 09:22:35 compute-0 ceph-mgr[74715]: log_channel(cluster) log [DBG] : pgmap v158: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 0 op/s
Dec 11 09:22:35 compute-0 radosgw[93354]: ====== starting new request req=0x7f36cd2845d0 =====
Dec 11 09:22:35 compute-0 radosgw[93354]: ====== req done req=0x7f36cd2845d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 11 09:22:35 compute-0 radosgw[93354]: beast: 0x7f36cd2845d0: 192.168.122.100 - anonymous [11/Dec/2025:09:22:35.285 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 11 09:22:35 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-2-0-compute-0-iryjby[107865]: 11/12/2025 09:22:35 : epoch 693a8d31 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd9fc003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 11 09:22:35 compute-0 radosgw[93354]: ====== starting new request req=0x7f36cd2845d0 =====
Dec 11 09:22:35 compute-0 radosgw[93354]: ====== req done req=0x7f36cd2845d0 op status=0 http_status=200 latency=0.001000032s ======
Dec 11 09:22:35 compute-0 radosgw[93354]: beast: 0x7f36cd2845d0: 192.168.122.102 - anonymous [11/Dec/2025:09:22:35.397 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Dec 11 09:22:35 compute-0 ceph-mon[74426]: pgmap v158: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 0 op/s
Dec 11 09:22:35 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-2-0-compute-0-iryjby[107865]: 11/12/2025 09:22:35 : epoch 693a8d31 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd9f0003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 11 09:22:36 compute-0 ceph-mon[74426]: mon.compute-0@0(leader).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 11 09:22:37 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-2-0-compute-0-iryjby[107865]: 11/12/2025 09:22:37 : epoch 693a8d31 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd9f8003c60 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 11 09:22:37 compute-0 ceph-mgr[74715]: log_channel(cluster) log [DBG] : pgmap v159: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 11 09:22:37 compute-0 radosgw[93354]: ====== starting new request req=0x7f36cd2845d0 =====
Dec 11 09:22:37 compute-0 radosgw[93354]: ====== req done req=0x7f36cd2845d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 11 09:22:37 compute-0 radosgw[93354]: beast: 0x7f36cd2845d0: 192.168.122.100 - anonymous [11/Dec/2025:09:22:37.287 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 11 09:22:37 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-2-0-compute-0-iryjby[107865]: 11/12/2025 09:22:37 : epoch 693a8d31 : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fda0c003800 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 11 09:22:37 compute-0 radosgw[93354]: ====== starting new request req=0x7f36cd2845d0 =====
Dec 11 09:22:37 compute-0 radosgw[93354]: ====== req done req=0x7f36cd2845d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 11 09:22:37 compute-0 radosgw[93354]: beast: 0x7f36cd2845d0: 192.168.122.102 - anonymous [11/Dec/2025:09:22:37.401 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 11 09:22:37 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-2-0-compute-0-iryjby[107865]: 11/12/2025 09:22:37 : epoch 693a8d31 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd9fc003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 11 09:22:38 compute-0 sudo[108702]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 11 09:22:38 compute-0 sudo[108702]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 11 09:22:38 compute-0 sudo[108702]: pam_unix(sudo:session): session closed for user root
Dec 11 09:22:38 compute-0 ceph-mon[74426]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Dec 11 09:22:38 compute-0 ceph-mon[74426]: log_channel(audit) log [DBG] : from='mgr.14577 192.168.122.100:0/2102670525' entity='mgr.compute-0.wwpcae' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec 11 09:22:38 compute-0 ceph-mon[74426]: pgmap v159: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 11 09:22:38 compute-0 ceph-mon[74426]: from='mgr.14577 192.168.122.100:0/2102670525' entity='mgr.compute-0.wwpcae' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec 11 09:22:39 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-2-0-compute-0-iryjby[107865]: 11/12/2025 09:22:39 : epoch 693a8d31 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd9fc003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 11 09:22:39 compute-0 ceph-mgr[74715]: log_channel(cluster) log [DBG] : pgmap v160: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 op/s
Dec 11 09:22:39 compute-0 radosgw[93354]: ====== starting new request req=0x7f36cd2845d0 =====
Dec 11 09:22:39 compute-0 radosgw[93354]: ====== req done req=0x7f36cd2845d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 11 09:22:39 compute-0 radosgw[93354]: beast: 0x7f36cd2845d0: 192.168.122.100 - anonymous [11/Dec/2025:09:22:39.289 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 11 09:22:39 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-2-0-compute-0-iryjby[107865]: 11/12/2025 09:22:39 : epoch 693a8d31 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd9f8003c60 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 11 09:22:39 compute-0 radosgw[93354]: ====== starting new request req=0x7f36cd2845d0 =====
Dec 11 09:22:39 compute-0 radosgw[93354]: ====== req done req=0x7f36cd2845d0 op status=0 http_status=200 latency=0.001000031s ======
Dec 11 09:22:39 compute-0 radosgw[93354]: beast: 0x7f36cd2845d0: 192.168.122.102 - anonymous [11/Dec/2025:09:22:39.404 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Dec 11 09:22:39 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-2-0-compute-0-iryjby[107865]: 11/12/2025 09:22:39 : epoch 693a8d31 : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd9e8000b60 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 11 09:22:40 compute-0 ceph-mon[74426]: pgmap v160: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 op/s
Dec 11 09:22:41 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-2-0-compute-0-iryjby[107865]: 11/12/2025 09:22:41 : epoch 693a8d31 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd9f4000b60 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 11 09:22:41 compute-0 ceph-mgr[74715]: log_channel(cluster) log [DBG] : pgmap v161: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 11 09:22:41 compute-0 radosgw[93354]: ====== starting new request req=0x7f36cd2845d0 =====
Dec 11 09:22:41 compute-0 radosgw[93354]: ====== req done req=0x7f36cd2845d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 11 09:22:41 compute-0 radosgw[93354]: beast: 0x7f36cd2845d0: 192.168.122.100 - anonymous [11/Dec/2025:09:22:41.292 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 11 09:22:41 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-2-0-compute-0-iryjby[107865]: 11/12/2025 09:22:41 : epoch 693a8d31 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd9f0003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 11 09:22:41 compute-0 radosgw[93354]: ====== starting new request req=0x7f36cd2845d0 =====
Dec 11 09:22:41 compute-0 radosgw[93354]: ====== req done req=0x7f36cd2845d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 11 09:22:41 compute-0 radosgw[93354]: beast: 0x7f36cd2845d0: 192.168.122.102 - anonymous [11/Dec/2025:09:22:41.408 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 11 09:22:41 compute-0 ceph-mon[74426]: mon.compute-0@0(leader).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 11 09:22:41 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-2-0-compute-0-iryjby[107865]: 11/12/2025 09:22:41 : epoch 693a8d31 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd9f8003c60 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 11 09:22:42 compute-0 ceph-mon[74426]: pgmap v161: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 11 09:22:43 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-2-0-compute-0-iryjby[107865]: 11/12/2025 09:22:43 : epoch 693a8d31 : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd9e80016a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 11 09:22:43 compute-0 ceph-mgr[74715]: log_channel(cluster) log [DBG] : pgmap v162: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 11 09:22:43 compute-0 radosgw[93354]: ====== starting new request req=0x7f36cd2845d0 =====
Dec 11 09:22:43 compute-0 radosgw[93354]: ====== req done req=0x7f36cd2845d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 11 09:22:43 compute-0 radosgw[93354]: beast: 0x7f36cd2845d0: 192.168.122.100 - anonymous [11/Dec/2025:09:22:43.295 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 11 09:22:43 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-2-0-compute-0-iryjby[107865]: 11/12/2025 09:22:43 : epoch 693a8d31 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd9f40016a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 11 09:22:43 compute-0 radosgw[93354]: ====== starting new request req=0x7f36cd2845d0 =====
Dec 11 09:22:43 compute-0 radosgw[93354]: ====== req done req=0x7f36cd2845d0 op status=0 http_status=200 latency=0.001000031s ======
Dec 11 09:22:43 compute-0 radosgw[93354]: beast: 0x7f36cd2845d0: 192.168.122.102 - anonymous [11/Dec/2025:09:22:43.412 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Dec 11 09:22:43 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-2-0-compute-0-iryjby[107865]: 11/12/2025 09:22:43 : epoch 693a8d31 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd9f0003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 11 09:22:43 compute-0 sshd-session[108735]: Accepted publickey for zuul from 192.168.122.10 port 52414 ssh2: ECDSA SHA256:cT48FffNzE3FSGRebchaTzw3hIqcCIoBfXY30Q2C9bc
Dec 11 09:22:43 compute-0 systemd-logind[792]: New session 38 of user zuul.
Dec 11 09:22:43 compute-0 systemd[1]: Started Session 38 of User zuul.
Dec 11 09:22:43 compute-0 sshd-session[108735]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Dec 11 09:22:43 compute-0 sudo[108739]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/bash -c 'rm -rf /var/tmp/sos-osp && mkdir /var/tmp/sos-osp && sos report --batch --all-logs --tmp-dir=/var/tmp/sos-osp  -p container,openstack_edpm,system,storage,virt'
Dec 11 09:22:43 compute-0 sudo[108739]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 11 09:22:44 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-mgr-compute-0-wwpcae[74711]: ::ffff:192.168.122.100 - - [11/Dec/2025:09:22:44] "GET /metrics HTTP/1.1" 200 48275 "" "Prometheus/2.51.0"
Dec 11 09:22:44 compute-0 ceph-mgr[74715]: [prometheus INFO cherrypy.access.139680037156848] ::ffff:192.168.122.100 - - [11/Dec/2025:09:22:44] "GET /metrics HTTP/1.1" 200 48275 "" "Prometheus/2.51.0"
Dec 11 09:22:44 compute-0 ceph-mon[74426]: pgmap v162: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 11 09:22:45 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-2-0-compute-0-iryjby[107865]: 11/12/2025 09:22:45 : epoch 693a8d31 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd9f8003c60 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 11 09:22:45 compute-0 ceph-mgr[74715]: log_channel(cluster) log [DBG] : pgmap v163: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 0 op/s
Dec 11 09:22:45 compute-0 radosgw[93354]: ====== starting new request req=0x7f36cd2845d0 =====
Dec 11 09:22:45 compute-0 radosgw[93354]: ====== req done req=0x7f36cd2845d0 op status=0 http_status=200 latency=0.001000031s ======
Dec 11 09:22:45 compute-0 radosgw[93354]: beast: 0x7f36cd2845d0: 192.168.122.100 - anonymous [11/Dec/2025:09:22:45.297 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Dec 11 09:22:45 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-2-0-compute-0-iryjby[107865]: 11/12/2025 09:22:45 : epoch 693a8d31 : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd9e80016a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 11 09:22:45 compute-0 radosgw[93354]: ====== starting new request req=0x7f36cd2845d0 =====
Dec 11 09:22:45 compute-0 radosgw[93354]: ====== req done req=0x7f36cd2845d0 op status=0 http_status=200 latency=0.001000031s ======
Dec 11 09:22:45 compute-0 radosgw[93354]: beast: 0x7f36cd2845d0: 192.168.122.102 - anonymous [11/Dec/2025:09:22:45.415 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Dec 11 09:22:45 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-2-0-compute-0-iryjby[107865]: 11/12/2025 09:22:45 : epoch 693a8d31 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd9f40016a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 11 09:22:46 compute-0 ceph-mon[74426]: pgmap v163: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 0 op/s
Dec 11 09:22:46 compute-0 ceph-mon[74426]: mon.compute-0@0(leader).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 11 09:22:47 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-2-0-compute-0-iryjby[107865]: 11/12/2025 09:22:47 : epoch 693a8d31 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd9f0003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 11 09:22:47 compute-0 ceph-mgr[74715]: log_channel(cluster) log [DBG] : pgmap v164: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 11 09:22:47 compute-0 radosgw[93354]: ====== starting new request req=0x7f36cd2845d0 =====
Dec 11 09:22:47 compute-0 radosgw[93354]: ====== req done req=0x7f36cd2845d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 11 09:22:47 compute-0 radosgw[93354]: beast: 0x7f36cd2845d0: 192.168.122.100 - anonymous [11/Dec/2025:09:22:47.300 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 11 09:22:47 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-2-0-compute-0-iryjby[107865]: 11/12/2025 09:22:47 : epoch 693a8d31 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd9f8003c60 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 11 09:22:47 compute-0 radosgw[93354]: ====== starting new request req=0x7f36cd2845d0 =====
Dec 11 09:22:47 compute-0 radosgw[93354]: ====== req done req=0x7f36cd2845d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 11 09:22:47 compute-0 radosgw[93354]: beast: 0x7f36cd2845d0: 192.168.122.102 - anonymous [11/Dec/2025:09:22:47.419 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 11 09:22:47 compute-0 ceph-mon[74426]: pgmap v164: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 11 09:22:47 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-2-0-compute-0-iryjby[107865]: 11/12/2025 09:22:47 : epoch 693a8d31 : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd9e80016a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 11 09:22:49 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-2-0-compute-0-iryjby[107865]: 11/12/2025 09:22:49 : epoch 693a8d31 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd9f40016a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 11 09:22:49 compute-0 ceph-mgr[74715]: log_channel(cluster) log [DBG] : pgmap v165: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 op/s
Dec 11 09:22:49 compute-0 radosgw[93354]: ====== starting new request req=0x7f36cd2845d0 =====
Dec 11 09:22:49 compute-0 radosgw[93354]: ====== req done req=0x7f36cd2845d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 11 09:22:49 compute-0 radosgw[93354]: beast: 0x7f36cd2845d0: 192.168.122.100 - anonymous [11/Dec/2025:09:22:49.302 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 11 09:22:49 compute-0 ovs-vsctl[108946]: ovs|00001|db_ctl_base|ERR|no key "dpdk-init" in Open_vSwitch record "." column other_config
Dec 11 09:22:49 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-2-0-compute-0-iryjby[107865]: 11/12/2025 09:22:49 : epoch 693a8d31 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd9f0003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 11 09:22:49 compute-0 radosgw[93354]: ====== starting new request req=0x7f36cd2845d0 =====
Dec 11 09:22:49 compute-0 radosgw[93354]: ====== req done req=0x7f36cd2845d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 11 09:22:49 compute-0 radosgw[93354]: beast: 0x7f36cd2845d0: 192.168.122.102 - anonymous [11/Dec/2025:09:22:49.423 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 11 09:22:49 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-2-0-compute-0-iryjby[107865]: 11/12/2025 09:22:49 : epoch 693a8d31 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd9f8003c60 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 11 09:22:50 compute-0 ceph-mon[74426]: pgmap v165: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 op/s
Dec 11 09:22:51 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-2-0-compute-0-iryjby[107865]: 11/12/2025 09:22:51 : epoch 693a8d31 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd9f8003c60 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 11 09:22:51 compute-0 lvm[109311]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Dec 11 09:22:51 compute-0 lvm[109311]: VG ceph_vg0 finished
Dec 11 09:22:51 compute-0 ceph-mgr[74715]: log_channel(cluster) log [DBG] : pgmap v166: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 11 09:22:51 compute-0 radosgw[93354]: ====== starting new request req=0x7f36cd2845d0 =====
Dec 11 09:22:51 compute-0 radosgw[93354]: ====== req done req=0x7f36cd2845d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 11 09:22:51 compute-0 radosgw[93354]: beast: 0x7f36cd2845d0: 192.168.122.100 - anonymous [11/Dec/2025:09:22:51.305 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 11 09:22:51 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-2-0-compute-0-iryjby[107865]: 11/12/2025 09:22:51 : epoch 693a8d31 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd9f8003c60 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 11 09:22:51 compute-0 radosgw[93354]: ====== starting new request req=0x7f36cd2845d0 =====
Dec 11 09:22:51 compute-0 radosgw[93354]: ====== req done req=0x7f36cd2845d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 11 09:22:51 compute-0 radosgw[93354]: beast: 0x7f36cd2845d0: 192.168.122.102 - anonymous [11/Dec/2025:09:22:51.426 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 11 09:22:51 compute-0 ceph-mon[74426]: mon.compute-0@0(leader).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 11 09:22:51 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-2-0-compute-0-iryjby[107865]: 11/12/2025 09:22:51 : epoch 693a8d31 : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd9f0003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 11 09:22:52 compute-0 ceph-mon[74426]: pgmap v166: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 11 09:22:52 compute-0 crontab[109741]: (root) LIST (root)
Dec 11 09:22:53 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-2-0-compute-0-iryjby[107865]: 11/12/2025 09:22:53 : epoch 693a8d31 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd9f0003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 11 09:22:53 compute-0 ceph-mgr[74715]: log_channel(cluster) log [DBG] : pgmap v167: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 11 09:22:53 compute-0 ceph-mon[74426]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Dec 11 09:22:53 compute-0 ceph-mon[74426]: log_channel(audit) log [DBG] : from='mgr.14577 192.168.122.100:0/2102670525' entity='mgr.compute-0.wwpcae' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec 11 09:22:53 compute-0 radosgw[93354]: ====== starting new request req=0x7f36cd2845d0 =====
Dec 11 09:22:53 compute-0 radosgw[93354]: ====== req done req=0x7f36cd2845d0 op status=0 http_status=200 latency=0.001000031s ======
Dec 11 09:22:53 compute-0 radosgw[93354]: beast: 0x7f36cd2845d0: 192.168.122.100 - anonymous [11/Dec/2025:09:22:53.307 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Dec 11 09:22:53 compute-0 ceph-mgr[74715]: [volumes INFO mgr_util] scanning for idle connections..
Dec 11 09:22:53 compute-0 ceph-mgr[74715]: [volumes INFO mgr_util] cleaning up connections: []
Dec 11 09:22:53 compute-0 ceph-mon[74426]: from='mgr.14577 192.168.122.100:0/2102670525' entity='mgr.compute-0.wwpcae' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec 11 09:22:53 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-2-0-compute-0-iryjby[107865]: 11/12/2025 09:22:53 : epoch 693a8d31 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd9e8002c90 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 11 09:22:53 compute-0 ceph-mgr[74715]: [volumes INFO mgr_util] scanning for idle connections..
Dec 11 09:22:53 compute-0 ceph-mgr[74715]: [volumes INFO mgr_util] cleaning up connections: []
Dec 11 09:22:53 compute-0 ceph-mgr[74715]: [volumes INFO mgr_util] scanning for idle connections..
Dec 11 09:22:53 compute-0 ceph-mgr[74715]: [volumes INFO mgr_util] cleaning up connections: []
Dec 11 09:22:53 compute-0 radosgw[93354]: ====== starting new request req=0x7f36cd2845d0 =====
Dec 11 09:22:53 compute-0 radosgw[93354]: ====== req done req=0x7f36cd2845d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 11 09:22:53 compute-0 radosgw[93354]: beast: 0x7f36cd2845d0: 192.168.122.102 - anonymous [11/Dec/2025:09:22:53.429 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 11 09:22:53 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-2-0-compute-0-iryjby[107865]: 11/12/2025 09:22:53 : epoch 693a8d31 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd9f8003c60 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 11 09:22:54 compute-0 ceph-mon[74426]: pgmap v167: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 11 09:22:54 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-mgr-compute-0-wwpcae[74711]: ::ffff:192.168.122.100 - - [11/Dec/2025:09:22:54] "GET /metrics HTTP/1.1" 200 48276 "" "Prometheus/2.51.0"
Dec 11 09:22:54 compute-0 ceph-mgr[74715]: [prometheus INFO cherrypy.access.139680037156848] ::ffff:192.168.122.100 - - [11/Dec/2025:09:22:54] "GET /metrics HTTP/1.1" 200 48276 "" "Prometheus/2.51.0"
Dec 11 09:22:55 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-2-0-compute-0-iryjby[107865]: 11/12/2025 09:22:55 : epoch 693a8d31 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd9f0003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 11 09:22:55 compute-0 ceph-mgr[74715]: log_channel(cluster) log [DBG] : pgmap v168: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 op/s
Dec 11 09:22:55 compute-0 radosgw[93354]: ====== starting new request req=0x7f36cd2845d0 =====
Dec 11 09:22:55 compute-0 radosgw[93354]: ====== req done req=0x7f36cd2845d0 op status=0 http_status=200 latency=0.001000032s ======
Dec 11 09:22:55 compute-0 radosgw[93354]: beast: 0x7f36cd2845d0: 192.168.122.100 - anonymous [11/Dec/2025:09:22:55.309 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Dec 11 09:22:55 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-2-0-compute-0-iryjby[107865]: 11/12/2025 09:22:55 : epoch 693a8d31 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd9f0003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 11 09:22:55 compute-0 radosgw[93354]: ====== starting new request req=0x7f36cd2845d0 =====
Dec 11 09:22:55 compute-0 radosgw[93354]: ====== req done req=0x7f36cd2845d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 11 09:22:55 compute-0 radosgw[93354]: beast: 0x7f36cd2845d0: 192.168.122.102 - anonymous [11/Dec/2025:09:22:55.432 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 11 09:22:55 compute-0 systemd[1]: Starting Hostname Service...
Dec 11 09:22:55 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-2-0-compute-0-iryjby[107865]: 11/12/2025 09:22:55 : epoch 693a8d31 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd9f8003c60 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 11 09:22:55 compute-0 systemd[1]: Started Hostname Service.
Dec 11 09:22:56 compute-0 ceph-mon[74426]: pgmap v168: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 op/s
Dec 11 09:22:56 compute-0 ceph-mon[74426]: mon.compute-0@0(leader).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 11 09:22:56 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-haproxy-nfs-cephfs-compute-0-qtoxfz[95274]: [WARNING] 344/092256 (4) : Server backend/nfs.cephfs.0 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 1ms. 2 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Dec 11 09:22:57 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-2-0-compute-0-iryjby[107865]: 11/12/2025 09:22:57 : epoch 693a8d31 : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd9e80035b0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 11 09:22:57 compute-0 ceph-mgr[74715]: log_channel(cluster) log [DBG] : pgmap v169: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Dec 11 09:22:57 compute-0 radosgw[93354]: ====== starting new request req=0x7f36cd2845d0 =====
Dec 11 09:22:57 compute-0 radosgw[93354]: ====== req done req=0x7f36cd2845d0 op status=0 http_status=200 latency=0.001000032s ======
Dec 11 09:22:57 compute-0 radosgw[93354]: beast: 0x7f36cd2845d0: 192.168.122.100 - anonymous [11/Dec/2025:09:22:57.313 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Dec 11 09:22:57 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-2-0-compute-0-iryjby[107865]: 11/12/2025 09:22:57 : epoch 693a8d31 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd9f0003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 11 09:22:57 compute-0 radosgw[93354]: ====== starting new request req=0x7f36cd2845d0 =====
Dec 11 09:22:57 compute-0 radosgw[93354]: ====== req done req=0x7f36cd2845d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 11 09:22:57 compute-0 radosgw[93354]: beast: 0x7f36cd2845d0: 192.168.122.102 - anonymous [11/Dec/2025:09:22:57.437 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 11 09:22:57 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-2-0-compute-0-iryjby[107865]: 11/12/2025 09:22:57 : epoch 693a8d31 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd9f0003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 11 09:22:57 compute-0 ceph-mon[74426]: pgmap v169: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Dec 11 09:22:58 compute-0 sudo[110091]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 11 09:22:58 compute-0 sudo[110091]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 11 09:22:58 compute-0 sudo[110091]: pam_unix(sudo:session): session closed for user root
Dec 11 09:22:59 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-2-0-compute-0-iryjby[107865]: 11/12/2025 09:22:59 : epoch 693a8d31 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd9f8003c60 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 11 09:22:59 compute-0 ceph-mgr[74715]: log_channel(cluster) log [DBG] : pgmap v170: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 11 09:22:59 compute-0 radosgw[93354]: ====== starting new request req=0x7f36cd2845d0 =====
Dec 11 09:22:59 compute-0 radosgw[93354]: ====== req done req=0x7f36cd2845d0 op status=0 http_status=200 latency=0.001000031s ======
Dec 11 09:22:59 compute-0 radosgw[93354]: beast: 0x7f36cd2845d0: 192.168.122.100 - anonymous [11/Dec/2025:09:22:59.315 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Dec 11 09:22:59 compute-0 radosgw[93354]: ====== starting new request req=0x7f36cd2845d0 =====
Dec 11 09:22:59 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-2-0-compute-0-iryjby[107865]: 11/12/2025 09:22:59 : epoch 693a8d31 : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd9e80035b0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 11 09:22:59 compute-0 radosgw[93354]: ====== req done req=0x7f36cd2845d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 11 09:22:59 compute-0 radosgw[93354]: beast: 0x7f36cd2845d0: 192.168.122.102 - anonymous [11/Dec/2025:09:22:59.459 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 11 09:22:59 compute-0 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-2-0-compute-0-iryjby[107865]: 11/12/2025 09:22:59 : epoch 693a8d31 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd9f0003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
